**Note:** This is a daily beta build (2026-05-15). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: update by pulling the new image:

```
docker pull ghcr.io/maziggy/bambuddy:daily
```

or

```
docker pull maziggy/bambuddy:daily
```

**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.
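A minimal sketch of the Watchtower setup mentioned above, assuming your container is named `bambuddy` (adjust the name and interval to your deployment — this is an illustrative config fragment, not the only way to run it):

```shell
# Run Watchtower once as a background container. It polls the registry
# every hour (--interval is in seconds) and restarts the "bambuddy"
# container whenever a new :daily image has been pushed.
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --interval 3600 \
  bambuddy
```

Passing the container name as a trailing argument limits Watchtower to that one container instead of watching everything on the host.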
### Changed
- Support bundle audited for new features — adds OIDC, 2FA, API keys, library/inventory/queue/maintenance totals, slicer-API reachability, GitHub backup status, and a per-printer Obico flag; also redacts two settings that were leaking and fixes a reachability-check architecture bug — The `support-info.json` block in support bundles auto-includes the `settings` table (with sensitive-key redaction), so settings-stored features like LDAP, Obico globals, integrated slicing URLs, Tailscale, and queue-drying already flowed through. What was missing was anything stored in dedicated tables, which had grown substantially without the bundle being updated. Triaging the recent OIDC / 2FA / group bugs (#1292, #1297) and the X1C slicer investigation involved repeatedly asking reporters for information that should have been in the bundle. New blocks added to `_collect_support_info` in `backend/app/api/routes/support.py`:
  - `auth` — OIDC providers (cleartext `name`, `is_enabled`, `scopes`, `email_claim`, `require_email_verified`, `auto_create_users`, `auto_link_existing_accounts`, `has_default_group`, `has_icon`, `linked_user_count`; `client_id`/`client_secret`/`issuer_url` stay out of the bundle), 2FA counts (`users_with_totp`, `email_otp_codes_pending`), API key counts (total/enabled/expired), long-lived token counts (total/active), group counts (system/custom).
  - `library` — `library_files_total`, `library_files_in_trash`, `library_folders_total`, `external_folders_total`, `external_links_total`, `makerworld_imports_total`.
  - `inventory` — `spools_internal`, `k_profiles_internal`, `k_profiles_spoolman`.
  - `queue` — `pending_total`, `manual_start_pending`, `oldest_pending_age_seconds` (catches items stuck because their target printer is offline or filament doesn't match).
  - `maintenance` — `items_total`, `items_enabled`.
  - `integrations.github_backup` — `configs_total`, a `providers_used` dict (github/gitea/forgejo/gitlab), `schedule_enabled_count`, `last_failure_count`.
  - `integrations.slicer_api` — `enabled`, `preferred`, `bambu_studio_url_set`, `orcaslicer_url_set`, plus an actual 2-second HTTP reachability ping (`bambu_studio_reachable`, `orcaslicer_reachable`) to differentiate "URL empty" from "URL misconfigured" from "service down".

  A per-printer `obico_enabled` flag is added to each entry in `printers[]`, parsed from the `obico_enabled_printers` setting via a new `_parse_obico_enabled_printers` helper that tolerates legacy comma-separated formats.

  Plus three smaller but important fixes caught while testing the bundle against a real instance: (1) the `mqtt_broker` value was leaking — the keyword-substring redaction filter at `support.py:850` had no entry that matched the `mqtt_broker` setting name, so the broker IP (e.g. `192.168.255.16`) was appearing in cleartext. Added `broker` to `sensitive_keys`. (2) `virtual_printer_tailscale_auth_key` was leaking — same reason, no keyword in the filter matched `_auth_key`. Added `auth_key` to the keyword set, and added a value-prefix safety net (`tskey-`) so any future Tailscale setting with an unexpected name still auto-redacts when its value starts with the Tailscale auth-key prefix. (3) The slicer-API reachability check was always returning `null`/`false` even when the slicer was up — two root causes stacked. First, the old code passed `info["settings"]` (already redacted) into `_collect_slicer_api_info`, so when `bambu_studio_api_url` had been redacted to `"[REDACTED]"`, the httpx call hit that literal string and crashed; when the setting was empty, the URL came through as `""` and the function returned `None`. Second — caught on the next round of testing — even after switching to read directly from `Settings.value`, the check only looked at the DB row, but the real slicer routes (`archives.py:3174-3180`, `library.py`) resolve the URL with a three-level precedence: DB setting → `app_settings.bambu_studio_api_url` (which reads the `BAMBU_STUDIO_API_URL` env var) → built-in default `http://localhost:3001`. Most installations run the sidecar on the default port or via env var, so the DB-only check returned `null` even when the slicer was up and reachable. The collector now mirrors the route's exact resolution path. The block also reports `bambu_studio_url_set_in_db: bool` and `bambu_studio_url_source: "db" | "env_or_default" | "unset"` so triage can see which layer supplied the URL — separating "user explicitly configured it" from "they're using the default port" without leaking the URL itself.

  Two regression tests pin both layers: `test_reachability_uses_unredacted_url` (no `"[REDACTED]"` ever reaches `_check_url_reachable`) and `test_env_var_fallback_url_pinged_when_db_setting_empty` (DB empty + env-var-set URL is actually pinged and reported reachable). All new collectors are wrapped in `try/except` so a single failure on one block can't blank the rest of the bundle. OIDC provider names are passed in cleartext deliberately — they're login-button labels (PocketID, Authentik, Google, etc.), not secrets, and provider-specific behavior (Azure handles claims differently from Authentik) is exactly the kind of detail that makes SSO bugs triagable in one round-trip instead of three. 13 new unit tests in `backend/tests/unit/test_support_helpers.py` cover the obico-parser edge cases, slicer-API reachability with mocked httpx (including the "404 = reachable" decision, the un-redacted-URL regression, and the env-var-fallback regression), the auth-info OIDC-cleartext-but-no-secrets contract, the GitHub-backup provider/failure aggregation, and the new `mqtt_broker` / `virtual_printer_tailscale_auth_key` / value-prefix-based redactions.
- Page headers unified across the app: consistent icon size, placement, and subtitle styling (PR #1272 by EdwardChamberlain, continuation of #1060 / #1203) — Nine pages (Archives, FileManager, Inventory, Maintenance, MakerWorld, Profiles, Projects, Settings, Stats) now share one header pattern:
  a `w-7 h-7` bambu-green icon next to a `text-2xl font-bold` title with a `text-bambu-gray mt-1` subtitle underneath, matching the look that landed earlier on Print Queue and Printers. FileManager and Projects dropped their rounded `bg-bambu-green/10 rounded-xl p-2.5` icon tile in favor of the plain icon to match the rest. The sidebar's "Queue" nav item is renamed to "Print Queue" (and its icon switched from `Calendar` to `ListOrdered`) to match the page header it leads to. The Stats page title is renamed Dashboard → Statistics to match the sidebar nav label that's been pointing at it (the page never was the printer dashboard — Printers is — and the mismatch confused new users; closes a small but recurring source of "where's the dashboard?" support questions). All renames flow through every locale: en/de/fr/it/ja/pt-BR/zh-CN/zh-TW updated for `nav.queue` and `stats.title`, plus a new `inventory.subtitle` key ("Manage your spools" + translations) used by the inventory header. Bonus on top of the stated scope: `inventory.toolbar.{filters, view, actions}` were untranslated English strings in fr/it/ja/pt-BR/zh-CN/zh-TW — Edward translated them properly in the same pass. `StatsPage.test.tsx` updated to assert the new "Statistics" title. Build clean, all 35 page tests still pass, i18n parity holds at 4753 leaves across all 8 locales. The Maintenance page subtitle keeps its red / amber / green severity color on the "X items due · Y warnings · all up to date" line — the colors carry actual at-a-glance status information, not just visual weight.
- Bambuddy now identifies honestly as itself on every outbound request to Bambu Lab / MakerWorld / Bambu Wiki — proactive alignment with Bambu Lab's 2026-05-12 statement on cloud access, which draws a clear line between modifying AGPL code (allowed) and "impersonating official clients in communication with our cloud infrastructure" (not allowed). Bambuddy was already on the right side of that line on the main authenticated cloud path (`User-Agent: Bambuddy/1.0` in `bambu_cloud.py:_get_headers`), but three secondary call sites were sending browser User-Agents — originally added under the assumption that Cloudflare's WAF would block non-browser identification. Tested on 2026-05-12 with `curl -H "User-Agent: Bambuddy/1.0"` against all three: `https://bambulab.com/api/sign-in/tfa` returned HTTP 400 with the expected application-level `{"code":5,"error":"Login failed"}` JSON (no Cloudflare interstitial), `https://api.bambulab.com/v1/iot-service/api/slicer/setting` returned HTTP 200 with the full 576 KB settings response, `https://makerworld.com/api/v1/design-service/*` returned the same response shape as a Firefox UA, and `https://wiki.bambulab.com/*` served identical HTML to a Chrome UA. The browser impersonation was unnecessary. All four call sites now send `Bambuddy/1.0 (+https://github.com/maziggy/bambuddy)` consistently — the URL in parens makes the source unambiguous so Bambu can distinguish our traffic from impersonators if they ever audit it. Files: `bambu_cloud.py` (the TOTP/TFA path no longer spoofs Chrome UA + Origin + Referer + Accept-Language headers — Origin/Referer were spoofing the `bambulab.com` origin, which the new comment block specifically calls out as removed), `makerworld.py` (Firefox UA replaced; the Referer header is kept because MakerWorld's CSRF / origin-check middleware uses it on some endpoints, which is functional, not identity-faking), `firmware_check.py` (Chrome UA on the public wiki scraper replaced — the wiki has no special handling for our UA). Separately: the `/v1/iot-service/api/slicer/setting` endpoint requires a `version` query parameter in Bambu Studio's XX.YY.ZZ.WW format (the API returns HTTP 400 "field 'version' is not set" without it, and HTTP 422 "Invalid input parameters" for non-matching formats like `bambuddy-1.0`), but Bambu's server accepts any value within that format — verified the same 576 KB response with `version=99.99.99.99`. The previous default `"02.04.00.70"` is an actual Bambu Studio release version (2.4.0.70). The default is now `"1.0.0.0"` (held in a new `_SLICER_API_VERSION` module constant in `bambu_cloud.py` and re-exported into `routes/cloud.py` so the two route defaults stay in sync), which satisfies the format requirement without claiming to be a specific Bambu Studio build. Unchanged on purpose: the `version="2.0.0.0"` parameters in `create_setting`/`update_setting` payloads are the preset's format version (extracted from `current.get("version", "2.0.0.0")` for updates, line 443) — they describe the preset schema, not the client, and stay as-is. Two regression tests rewritten to lock in the new behavior: `test_verify_totp_uses_honest_bambuddy_user_agent` (was `test_verify_totp_includes_browser_headers` — asserts the UA starts with `Bambuddy/` and that `Mozilla`/`Chrome`/`Origin`/`Referer` are not present) and `test_sends_honest_bambuddy_user_agent` (was `test_sends_browser_like_headers` — same shape, plus continues to assert the deprecated `x-bbl-*` Bambu-app identification headers are still gone). All 4598 backend tests pass.
- Spoolman weight tracking now uses per-print grams for all spools, matching the internal Filament Inventory (#1119, reported by Moskito99) — Spoolman previously had two mutually exclusive weight paths: AMS remain% × tray_weight auto-sync (default; only worked for Bambu Lab spools with valid RFID tray_weight) and per-print 3MF-grams tracking (only enabled when "Disable AMS Weight Sync" was toggled on). Non-BL spools without RFID fell through both paths — AMS auto-sync had no tray_weight to multiply, and the inventory_remaining fallback was wiped because activating Spoolman deletes the internal `spool_assignment` table — so Spoolman never saw a weight update for them. The internal Filament Inventory has no such gap: it always uses per-print 3MF grams as the primary path with AMS-remain% delta as fallback, and it works for every spool type. Spoolman now does the same: per-print tracking runs whenever Spoolman is enabled and is the only writer of `remaining_weight`. AMS auto-sync continues to maintain spool metadata and slot assignments but no longer touches weight (eliminating the double-count that would otherwise occur for BL spools with both paths active). `store_print_data` (`spoolman_tracking.py:159`) had its `disable_weight_sync` early-return removed; the three `sync_ams_tray` call sites (`main.py:1450` auto-sync, `spoolman.py:318` per-printer manual, `spoolman.py:517` sync-all) now hard-code `disable_weight_sync=True`. The `spoolman_disable_weight_sync` setting is now deprecated and a no-op — kept in the DB/UI for backwards compat. Behavioral consequence for existing users on the default flag (False): live AMS-based `remaining_weight` updates between prints stop happening; weight updates now arrive once per print completion with 3MF gram precision. Regression test `test_spoolman_tracking.py::test_stores_tracking_when_disable_weight_sync_is_false` proves the early-return is gone.
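The per-print accounting described above reduces to one rule: subtract the 3MF-reported grams from the spool's remaining weight once per completed print, clamping at zero. A minimal sketch (the function name is hypothetical; the real logic lives in `spoolman_tracking.py` and writes the result through the Spoolman API):

```python
def apply_print_usage(remaining_weight_g: float, grams_used_g: float) -> float:
    """Return the new remaining weight after one completed print.

    Per-print 3MF grams are the sole writer of remaining_weight, so each
    completed print applies exactly one subtraction. Clamping at zero keeps
    a slightly over-reported print from driving the spool negative.
    """
    if grams_used_g < 0:
        raise ValueError("grams_used_g must be non-negative")
    return max(0.0, remaining_weight_g - grams_used_g)


# Example: a 1000 g spool after a print that consumed 12.5 g
print(apply_print_usage(1000.0, 12.5))  # → 987.5
```

Because AMS auto-sync no longer touches weight, this subtraction can never double-count against a concurrent remain%-based update.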
### Added
- Manual LDAP user provisioning from the UI (#1298, reported by Fuechslein) — Until now the only way to onboard an LDAP user was to leave Auto-provision on and have them log in once, because the create-user form had no LDAP awareness — admins who wanted to disable auto-provision had to hand-edit the database to create the row. The user-create modal now grows a Local / LDAP tab toggle (visible only when LDAP is enabled in settings, so non-LDAP installs see no UI change). The LDAP tab is a directory search: type ≥ 2 characters and the new `GET /auth/ldap/search` endpoint uses the service-account bind to query the directory with a fixed OR filter across `sAMAccountName`, `uid`, `mail`, `displayName`, and `cn` (covering both Active Directory and OpenLDAP layouts; user input is RFC 4515-escaped so a typed `*` doesn't enumerate the whole tree). Each result is annotated with `already_provisioned` so usernames that already exist as BamBuddy users render dimmed and disabled. Picking a result and clicking Provision user hits `POST /auth/ldap/provision`, which re-resolves the username via the service bind (rather than trusting the client payload) and calls the same `_provision_ldap_user` helper the auto-provision login path uses — so group mapping, default-group fallback, and email sync behave identically regardless of which path created the user. Distinct error responses cover the failure modes (400 LDAP disabled / query too short, 404 directory miss, 409 username already exists locally vs. already-provisioned LDAP user, 503 directory unreachable with the underlying ldap3 exception class + message in the detail field so the operator can diagnose without reading backend logs). A backend refactor extracts `_open_service_connection` + `_extract_user_info` helpers in `backend/app/services/ldap_service.py` so the new `lookup_ldap_user` and the existing `authenticate_ldap_user` share the bind + attribute-extraction paths (POSIX `memberUid` + primary `gidNumber` + case-insensitive DN dedupe stay in one place).

  Two ldap3 schema-check workarounds for OpenLDAP installs (caught in user testing against an OpenLDAP directory): (1) the directory-search connection is opened with `check_names=False` because ldap3's client-side filter validation rejects the AD-only `sAMAccountName`/`displayName` names in the cross-schema OR filter before any packet is sent; (2) the search requests `attributes=["*"]` (all user attributes) rather than the explicit AD-flavoured name list, because ldap3's `build_attribute_selection` validates each named attribute against the server schema independently of `check_names` and only the `*` wildcard is in its hard-coded `ATTRIBUTES_EXCLUDED_FROM_CHECK` exclusion list — so a list like `["sAMAccountName", "uid", ...]` still throws `LDAPAttributeError` on OpenLDAP. The login/lookup paths (`authenticate_ldap_user`, `lookup_ldap_user`) keep `check_names=True` so typos in the configured `user_filter` setting still fail loudly. A new shared frontend component `<LdapUserPicker>` in `frontend/src/components/LdapUserPicker.tsx` handles the debounced search (300 ms, min 2 chars), result list, selection, and provision mutation; it's rendered from all four create-user modal paths — basic + advanced-auth in `UsersPage.tsx`, basic + advanced-auth in `SettingsPage.tsx` (the latter being the "Add User" inside Settings → Authentication, which uses a separate modal flow from the dedicated Users page) — and the shared `CreateUserAdvancedAuthModal` gains an `ldapEnabled` + `onLdapProvisioned` prop pair so both pages drive the same component. i18n: 14 new keys under `users.modal.ldap*` + `users.modal.{localTab,ldapTab,tabsAriaLabel}` + 1 toast key in `frontend/src/i18n/locales/en.ts` (the other 7 locales fall back to English per project convention). The wiki at `features/authentication.md` was also corrected — the prior "When disabled, an admin must pre-create the user in BamBuddy" line was misleading (no UI path existed) and now describes the new search-and-provision flow.

  Regression tests: 14 unit tests in `backend/tests/unit/services/test_ldap_service.py` cover the filter shape, wildcard escaping, username-canonical fallbacks (sAMAccountName → uid → cn), bind-failure propagation, the no-password-bind contract of `lookup_ldap_user`, and pin both ldap3 schema-check workarounds (`check_names=False` on the search connection + `attributes=["*"]` so OpenLDAP doesn't reject the request). 12 integration tests in `backend/tests/integration/test_ldap_provision.py` cover auth gating, short-query rejection, LDAP-disabled rejection, the `already_provisioned` annotation, the 4xx/5xx error matrix, and a happy-path provision that verifies `auth_source=ldap`, `password_hash=None`, and group-mapping inheritance from the auto-provision path. 5 frontend tests in `LdapUserPicker.test.tsx` cover the debounce, the search → select → provision flow, already-provisioned rows rendering disabled, and surfaced provision errors. 65 LDAP-related backend tests + 5 picker tests pass; full backend ruff clean; frontend build clean.
- Slice modal: pick the build plate (#1337, reported by digitalskies) — Slicing a plain STL through the integrated slicer always defaulted to whatever
  `curr_bed_type` lived in the chosen process preset (typically Cool Plate), which the slicer CLI then rejected for high-temp filaments with `Plate 1: Cool Plate does not support filament 1`. The user had no way to switch plates short of cloning the process preset in BambuStudio, which defeats the point of the in-app slicer. The Slice modal now exposes a Build plate dropdown with the six canonical BambuStudio / OrcaSlicer plates (Cool Plate, Cool Plate SuperTack, Engineering Plate, High Temp Plate, Textured PEI Plate, Smooth PEI Plate) plus an explicit Auto (use process preset) option that preserves the previous behavior. The dropdown sits between the Process profile and Filament rows so it stays visible regardless of how many filament slots the picked plate uses (a long filament list would otherwise push it off the modal's `max-h-[85vh]` scroll viewport) and is always enabled — including when the user picks a Printer Preset Bundle from the top BundlePicker. When the user picks a specific plate, the new `bed_type` field on `SliceRequest` (`backend/app/schemas/slicer.py`) flows through the dispatcher via two paths: (1) resolved-preset path — the route helper `_patch_process_bed_type` in `backend/app/api/routes/library.py` overwrites `curr_bed_type` on the resolved process JSON before forwarding to the sidecar (no preset cloning required); (2) bundle dispatch path — `slice_with_bundle` in `backend/app/services/slicer_api.py` adds a `bedType` form field to the sidecar multipart so the sidecar can pass `--curr_bed_type` through to the CLI, which lets the override take effect even though Bambuddy can't patch the bundle's process JSON locally (the sidecar materialises it from the stored .bbscfg). Sidecar versions that don't recognise the field silently no-op — the slice still runs, just with the bundle's default plate; the slicer-API fork at maziggy/orca-slicer-api will need the matching change for the bundle path to take full effect.

  i18n parity: 8 new keys (`slice.bedType.{label,auto,coolPlate,coolPlateSuperTack,engineering,highTemp,texturedPEI,smoothPEI}`) added to all 8 locales — full German translation, English fallbacks elsewhere per project convention. Regression tests: 4 in `test_slice_request_bed_type.py` (`bed_type` defaults to None, accepts the six canonical strings, rejects overlong input via the schema's `max_length=64`; `_patch_process_bed_type` overwrites an existing value, adds the field when missing, and returns the input unchanged for malformed JSON or non-dict roots), 4 in `test_library_slice_api.py` (resolved-preset path: with `bed_type` set, the sidecar receives `"curr_bed_type": "Textured PEI Plate"` in the presetProfile multipart part; without it, `curr_bed_type` stays out of the body entirely; bundle dispatch path: the `bedType` form field carries the override through to the sidecar; omitting `bed_type` keeps the form field out of the request so the bundle's own `curr_bed_type` is preserved), 2 in `SliceModal.test.tsx` (dropdown selection puts `bed_type` on the request; leaving it on Auto omits the field). 59 backend slice tests + 34 SliceModal tests pass; build and i18n parity script clean.
### Fixed
- Plate-detection calibration captured the wrong camera when an external camera was configured (#1359, reported by Andlar94) — On the reporter's A1 with an external RTSP / go2rtc camera enabled, every print start raised "Build plate not empty" no matter how perfectly they calibrated. Root cause: the runtime auto-check at print start in `backend/app/main.py:1819` called `check_plate_empty(..., use_external=printer.external_camera_enabled, ...)` — honouring the external-camera setting. The manual UI check + calibration routes in `backend/app/api/routes/camera.py` declared `use_external: bool = False`, and the frontend client at `frontend/src/api/client.ts` always sent `use_external=false` explicitly (the UI call sites in `PrintersPage.tsx` never passed `useExternal`). So calibration captured a frame from the built-in chamber camera and saved it as the reference; the runtime auto-check captured a frame from the external camera and diffed it against that built-in reference — a permanent difference well above any sane threshold, hence "not empty" on every print. Fix: the two routes now use `use_external: bool | None = None`, and after the printer row is loaded they derive the default as `bool(printer.external_camera_enabled and printer.external_camera_url and printer.external_camera_type)` — identical to the runtime path's logic and the service-layer gate at `plate_detection.py:605`. Centralising the default on the backend means any current or future caller automatically gets the right camera without having to remember the flag. The frontend client now only forwards `use_external` when the caller explicitly sets it (default omitted → backend decides), so the existing UI buttons immediately benefit. The power-user override path stays open: passing `?use_external=false` on a printer with an external camera still wins, so anyone who deliberately wants a built-in-camera reference can still get one.

  Regression tests in `backend/tests/integration/test_camera_api.py`: `test_check_plate_defaults_use_external_when_external_camera_enabled` and `test_calibrate_plate_defaults_use_external_when_external_camera_enabled` pin the new default for a printer with external camera + URL + type set; `test_check_plate_defaults_use_external_false_when_external_camera_disabled` pins the built-in default for the no-external-camera case (the common path stays untouched); `test_calibrate_plate_explicit_use_external_false_overrides_default` pins the explicit-override escape hatch. All 11 plate-tagged camera integration tests pass; ruff clean; frontend build clean.
- API Keys page now exposes a narrowly scoped "Update electricity price" toggle so the Home Assistant dynamic-tariff integration actually works (#1356, reported by maziggy) — The reporter followed the Energy Tracking wiki page literally — "create a key with Write Settings permission, then PATCH
  `/api/v1/settings` with `{energy_cost_per_kwh: ...}`" — and hit `{"detail":"API keys cannot be used for administrative operations"}`. Triage showed three independent drifts: (1) the wiki listed nine fictional permissions ("Read Printers / Write Settings / Admin / …") but the actual UI in `SettingsPage.tsx:3683-3744` only ever exposed four toggles (Read Status, Manage Queue, Control Printer, Allow Cloud Access). There was no Write Settings toggle to tick. (2) Even if the UI had exposed it, the backend hard-denies `Permission.SETTINGS_UPDATE` for every API key via `_APIKEY_DENIED_PERMISSIONS` in `backend/app/core/auth.py` — intentional protection because `PATCH /settings` can rewrite SMTP/LDAP/MQTT credentials and the HA access token, which would silently widen the attack surface beyond what any documented use case needs. (3) So the wiki had been promising a workflow that was never deliverable. Fix: introduce a narrowly scoped door for exactly the documented use case rather than relaxing the deny list. New column `can_update_energy_cost BOOLEAN DEFAULT FALSE` on `api_keys` (`backend/app/models/api_key.py`) with an idempotent migration in `backend/app/core/database.py` — defaults FALSE so existing keys never silently gain settings-write capability on upgrade. New endpoint `POST /api/v1/settings/electricity-price` in `backend/app/api/routes/settings.py` accepts `{"energy_cost_per_kwh": <float ≥ 0>}` — the field name matches what the wiki already documented, so the HA `rest_command` example needs only a URL + method change, not a payload change. New custom dependency `require_energy_cost_update()` in `backend/app/core/auth.py` bypasses the `_APIKEY_DENIED_PERMISSIONS` check for this one route for API keys with `can_update_energy_cost=True`; JWT users still go through the standard `SETTINGS_UPDATE` permission check; auth-disabled deployments allow it (matching other settings routes). Crucially, the general `PATCH /settings` route remains denied for API keys — flipping the narrow flag does NOT widen general settings-write access (a regression test pins this).

  Schema/route wiring in `backend/app/schemas/api_key.py` + `backend/app/api/routes/api_keys.py` accepts and returns the new field on create/update/list. Frontend: a fifth toggle "Update electricity price" added to the create-API-key card in `SettingsPage.tsx` with an amber "Energy" badge on existing keys that have it set; the `APIKey`/`APIKeyCreate`/`APIKeyUpdate` types in `api/client.ts` gained the new field; 16 new i18n keys (`updateEnergyCost`, `updateEnergyCostDescription`, `energyCostBadge`) added to all 8 locales — full German translation, English fallbacks elsewhere per project convention. Wiki rewrites: `features/api-keys.md` — replaced the fictional 9-row permissions table with the actual 5 toggles plus an info box explaining why no general Write Settings / Admin exists; `features/energy.md` — the Home Assistant section now points at `POST /api/v1/settings/electricity-price`, instructs users to tick the new permission, and adds a deprecation warning for users who built the integration from the old (broken) `PATCH /settings` example. Tests: `backend/tests/integration/test_settings_electricity_price.py` — 8 tests covering create-with-flag, default-off, API-key-with-flag updates persist, API-key-without-flag → 403, JWT admin user with SETTINGS_UPDATE allowed, anon → 401, negative price → 422 (Pydantic `ge=0`), and the critical regression test `test_patch_settings_still_denied_with_energy_flag` that pins the narrow-flag-doesn't-widen-PATCH contract. `frontend/src/__tests__/pages/SettingsPage.test.tsx` — 2 new tests: the Energy badge renders for keys with the flag, and the toggle's value flows through to the POST body when the box is ticked. All 8 new backend tests + 32/32 SettingsPage tests pass; ruff clean; i18n parity passes; frontend build clean.
- Layer timelapse now starts for queue/VP-dispatched prints (#1353, reported by Andlar94) — The reporter's external camera + go2rtc setup was configured correctly (Obico was happily polling the snapshot URL for ML plate detection) but no MP4 was ever produced. Logs showed
  `[LAYER-TL] Stitching layer timelapse for printer 1` after each print, yet no frames were ever captured and no `[LAYER-TL] Attaching timelapse...` follow-up appeared. Root cause: `layer_timelapse.start_session()` was only called from the two new-archive paths in `on_print_start` (`backend/app/main.py:2510` fallback path and `2600` regular new-archive). The expected-archive branch at `main.py:1981-2052` — where every reprint and every queue/VP-dispatched print lands — updated the existing archive's status to `printing` but never started a timelapse session. So `_background_layer_timelapse` ran at print-complete time, called `tl_complete(printer_id)`, found no active session in `_active_sessions`, silently returned `None`, and the wrapper at `main.py:3917` produced no log message for the no-session case. Every print that came through the queue (or any reprint) silently lost its timelapse. Fix: mirror the same `if printer.external_camera_enabled and printer.external_camera_url: start_session(...)` call in the expected-archive branch right after `_active_prints` registration. The two pre-existing paths are untouched. Help-text correction: the snapshot URL field's tooltip previously read "Single-frame URL used for notification thumbnails, finish photos, timelapse and plate detection" — which is technically true but read as if filling in the URL was sufficient to enable those features. Reworded across all 8 locales to "Timelapse and plate detection each require their own per-printer toggle — this URL is just the image source they pull from when active" so admins know they still need to enable plate detection per printer (a separate toggle) and that timelapse only fires while a print is running.

  Regression tests in `backend/tests/unit/test_layer_timelapse_expected_archive.py`: `test_expected_archive_path_starts_timelapse_when_external_camera_enabled` exercises the full `on_print_start` flow with a registered expected print + `external_camera_enabled=True` and asserts `start_session` is called with the expected print's archive_id (not a freshly created one); `test_expected_archive_path_skips_timelapse_when_external_camera_disabled` keeps the existing gate in place so we don't try to capture from a None URL. 2 new tests pass; ruff clean; frontend i18n parity passes; bundle builds.
- Assign Spool now configures the slot even after a "Reset Slot" on A1 Mini BMCU / P1S Standard AMS (#1322 follow-up, reported by RosdasHH) — The original fix widened empty-slot detection to
state == 11 OR tray_type != "", which closed the configured-slot reconfig case (PETG-over-PLA) but didn't help the "Reset Slot on printer screen with spool still inserted" flow: on these firmwares the AMS reportsstate=3, tray_type=""after a Reset Slot regardless of whether a spool is physically loaded. The empty-detection therefore decided "empty", skipped the MQTT publish, marked the assignment pending, and waited foron_ams_changeto re-fire when the AMS transitioned to "loaded" — but the AMS never transitioned, because nothing was changing physically. A deadlock with no escape from user actions. Reporter pinned it by removing theif not slot_is_empty:gate atbackend/app/api/routes/inventory.py:1302and verified the firmware accepts the MQTT push when a spool is present, even withstate=3, tray_type="". The original guard's rationale — "Bambu firmware silently drops ams_filament_setting / extrusion_cali_sel for unloaded slots" — turned out to be over-cautious: it's load-bearing only for slots that the firmware itself explicitly marks empty viastate == 9("no spool") orstate == 10("spool present but no feed"). For ambiguous states (state=3default-idle, missing-state on older firmwares), the AMS doesn't give us a reliable signal at all, so the safest bet is to treat the user's explicit Assign click as their assertion that a spool is there and let the firmware decide what to do with the push. Fix: the empty-detection now only short-circuits onstate ∈ {9, 10}— every other state attempts MQTT.pending_configis now driven by either the explicit-empty signal ORnot configured(so a printer-offline / no-client publish failure still flags the assignment as awaiting follow-up). Theon_ams_changereplay logic atbackend/app/main.py:1031is unchanged and still serves as the safety net for state=9/10 slots whose spools get inserted later (and for any truly-empty slot the firmware dropped — DBfingerprint_typestays empty until an AMS push actually provides one, so the replay still fires). 
Trade-off: in the rare case of assigning to a slot that really IS empty with `state=3`, the badge will show "Configured" even though the firmware silently dropped the push. Most users assign right after inserting, so this is a small UI-honesty cost in exchange for unblocking the much more common Reset-Slot workflow. Follow-up optimization (also RosdasHH): the reporter then traced the raw MQTT payload and found that P1S / A1 Mini send only `{"id": N}` for a genuinely empty slot — no `state`, no `tray_type`, no other fields. Without that signal, the assign path was firing one wasted MQTT publish per click on a truly empty slot (the firmware dropped it silently, but still). The AMS parser at `backend/app/services/printer_manager.py:788` now detects the bare-tray shape (`len(tray) == 1 and "id" in tray and state is None`) and promotes it to `state=9` — the firmware's explicit "no spool" code — which lets the inventory route's existing `state ∈ {9, 10}` short-circuit apply. The detection is intentionally narrow: the post-Reset-Slot A1 Mini BMCU case sends a populated payload with empty values (`state=3, tray_type=""`), which has more than one key and stays unaffected — so the #1322 root fix is preserved. Regression tests in `backend/tests/integration/test_inventory_assign.py`: `test_post_reset_slot_with_state_3_still_fires_mqtt` (renamed from the previous "marks_pending" test, which was pinning the bug) and `test_state_missing_with_empty_tray_type_still_fires_mqtt` (inverted from the legacy "older firmware empty → pending" assertion) pin the new behavior on the two firmware shapes the reporter hit. `test_empty_tray_type_without_state_still_fires_mqtt` covers the no-state SpoolBuddy case. `test_no_ams_data_with_no_client_marks_pending` keeps the printer-offline path producing `pending_config=True` so the on_ams_change replay still triggers. `test_state_empty_skips_mqtt_and_marks_pending` (state=9) is unchanged — the firmware's explicit "no spool" still short-circuits correctly.
The recent `dd3e3f80` k-profile fix was a separate red-herring path the reporter happened to also hit during testing; it stays as-is. All 28 inventory-assign tests + 312 inventory-tagged tests pass; ruff clean. Bare-tray follow-up tests: `test_bare_tray_emulates_state_9` and `test_populated_payload_with_empty_state_3_is_not_promoted` in `backend/tests/unit/services/test_printer_manager.py` — the second one is the explicit guard against accidentally regressing the #1322 root case. - Firmware update dialog now survives Cloudflare blocks and transient outages on
`bambulab.com` (#1350, reported by K1ngJony) — The user's X1C on 01.10.00.00 saw "01.11.02.00 newer · Unavailable" plus the error "Firmware file for 01.11.02.00 is not available from Bambu Lab", and the logs showed repeated `Failed to get Bambu Lab page: 403` warnings. Two problems stacked: (1) `https://bambulab.com/en/support/firmware-download/all` (the page Bambuddy scrapes to extract the Next.js `buildId` used to fetch per-model JSON with download URLs) was returning 403 from the reporter's network — Cloudflare bot protection on bambulab.com is stricter than on the wiki and, prior to the 2026-05-12 compliance audit, the firmware-check service still claimed to be Chrome 120 via UA spoofing. The UA was updated to an honest `Bambuddy/1.0` in that audit, but `Accept` / `Accept-Language` headers were never sent, so the request still tripped the "bare Python client" signal. (2) The `buildId` was cached in-memory only (1 h TTL), so every backend restart forced a fresh page fetch — meaning the first 403 from the user's network permanently broke download-URL resolution for that session even though the previous run had a perfectly valid buildId. Fix in `backend/app/services/firmware_check.py`: (a) the httpx client now sends `Accept: text/html,application/json,*/*;q=0.8` and `Accept-Language: en-US,en;q=0.9` alongside the existing honest `Bambuddy/1.0` UA — both headers any normal client sends, no impersonation. (b) `_get_build_id()` gained a disk-cache layer at `<data_dir>/firmware/build_id.json`: successful fetches persist `{build_id, fetched_at}` to disk; the in-memory cache (fresh path, 1 h TTL) is checked first, then the disk cache seeds the in-memory slot on cold start, then the live fetch tries to refresh. On 403 or network error, we keep the cached buildId and set a new `download_page_unreachable` flag so callers can render an honest error.
(c) `_fetch_all_versions_from_download_page` now retries once when a cached buildId returns 404 (Bambu rebuilt the page → invalidate + refetch + retry); on 403 it sets the unreachable flag and gives up gracefully without churning. Better error message in `backend/app/services/firmware_update.py`: when a wiki-listed version has no download URL because `download_page_unreachable` is true, the dialog now says "Could not reach Bambu Lab's firmware download page to fetch the file URL for X. Version is listed on the Bambu wiki but the download endpoint is unreachable from this network. Try again later, or download the firmware manually from bambulab.com and copy it to the printer's SD card." instead of the misleading "Firmware file for X is not available from Bambu Lab" (which implied Bambu didn't have the file, when actually we just couldn't reach Bambu). A version genuinely missing from the catalog still gets the original message. Regression tests in `backend/tests/unit/test_firmware_versions.py`: `test_client_headers_identify_honestly_and_send_browser_accept` pins UA + Accept headers, `test_build_id_is_persisted_to_disk` confirms the disk write on success, `test_build_id_falls_back_to_disk_on_403` reproduces the reporter's 403 with a pre-seeded disk cache, `test_download_page_unreachable_flag_set_on_403_json` covers the per-model JSON endpoint 403 path, `test_download_page_retries_once_when_buildid_stale` proves the 404 retry. All 12 firmware tests pass; ruff clean. - Subtype dropdown on the Add/Edit Spool form now offers
`CF` (carbon fiber) and `GF` (glass fiber) (#1345, reported by maziggy) — The Subtype dropdown in `frontend/src/components/spool-form/FilamentSection.tsx` is populated from the `KNOWN_VARIANTS` array in `frontend/src/components/spool-form/constants.ts`. `CF` and `GF` were missing, so a user adding a third-party PETG-CF spool via the Material=PETG + Subtype=CF flow (the same shape Bambu's "PETG HF" already used) couldn't find the subtype in the list and had to type it freehand into the "create new" tail. Added both — `CF` to match `PETG-CF` / `PLA-CF` / `ASA-CF` / `PA-CF`, and `GF` as the natural pair for `ABS-GF` / `PA6-GF`. `parsePresetName` in `spool-form/utils.ts` is unaffected: its materials list is iterated longest-first, so a cloud preset like `Bambu PETG-CF Black` still resolves to material=`PETG-CF` with an empty afterMaterial (the variant loop runs on `""` and finds nothing — no accidental Material=PETG / Subtype=CF rewrite). Frontend build clean. - Spool-assignment dialog stacks correctly: the material-mismatch confirmation appears above its parent, and dashboard filament hover popovers no longer get covered by sibling printer cards (#1336 follow-up, mismatch case reported by RosdasHH) — Two stacking-context regressions surfaced after the original z-50 → z-[100] bump on
`AssignSpoolModal` landed. (1) Material-mismatch ConfirmModal hidden behind its parent. Assigning a spool whose material differs from the one configured on the slot opens a yellow warning ConfirmModal from inside `AssignSpoolModal`. ConfirmModal's overlay was hardcoded to `z-50` in its wrapper at `frontend/src/components/ConfirmModal.tsx`, so once the parent moved to `z-[100]` the child sat behind it — the user clicked Assign, saw the parent dim slightly, and had nothing visible to confirm. Added an optional `overlayZIndex?: string` prop to `ConfirmModal` (defaults to `z-50` so all 82 other call sites are untouched), and the mismatch site at `AssignSpoolModal.tsx:584` passes `overlayZIndex="z-[110]"` so the warning sits above its parent. (2) `FilamentHoverCard` / `EmptySlotHoverCard` covered by neighbouring printer cards. Hovering an AMS slot on the dashboard opens a "Jade White · Bambu PETG HF · K Factor 0.024 · 87% · Open in Inventory / Configure" popover. The popover was using `position: absolute` with `z-[60]` inside its trigger — but each printer card on the dashboard creates its own stacking context (any `filter: drop-shadow` / `transform` / positioned-with-z descendant is enough), and `z-index` does not cross stacking-context boundaries: the next card in DOM order always wins regardless of how high the inner z-index goes. Visible as a "Jade White" tooltip getting half-eaten by the AMS-C tile column on the right-hand neighbour card. Fixed by portaling both hover cards to `document.body` (`FilamentHoverCard.tsx` via `createPortal` from `react-dom`) with `position: fixed` and screen-space coordinates computed from `triggerRef.current.getBoundingClientRect()`. Coords are recomputed on visibility change, on `scroll` (capture phase), and on `resize` so the popover tracks the trigger when the viewport moves; a `requestAnimationFrame` re-measure after the initial paint avoids a one-frame flicker before the card has its rendered dimensions.
Hover handlers are wired on both the trigger AND the portaled card, so moving the cursor from the slot tile onto the popover doesn't auto-dismiss it after 100 ms. The smart top/bottom placement logic (flips to below the trigger when there's not enough headroom above the fixed 56 px header) is preserved, as is the arrow pointer that points back at the slot. `z-[60]` stays — but it's now global because the popover lives at the root of the DOM, so it always beats dashboard widgets without conflicting with full-screen modals at `z-[100]`. All 20 `FilamentHoverCard`, 17 `ConfirmModal`, and 13 `AssignSpoolModal` tests pass; frontend build clean. - Deleting a print archive no longer wipes its filament / time / cost / energy contribution from Quick Stats (#1343, reported by IndividualGhost1905) — Running the same model ten times and then deleting nine of the resulting archive entries (to keep the file list tidy) silently rewound the totals on the Statistics page:
`total_prints`, `total_filament_grams`, `total_cost`, and per-print energy all dropped back to whatever the surviving archive contributed, as if the other nine prints had never happened. Root cause: every metric in `get_archive_stats` at `backend/app/api/routes/archives.py` is recomputed on each render via `COUNT` / `SUM` over the live `PrintArchive` rows, so removing a row removes its contribution. (Energy in the default "Total" mode already survived archive deletion because it reads the smart-plug lifetime counters via `_sum_live_plug_totals` — that's the architectural shape we now generalise to the rest of the metrics.) Fix: soft delete with opt-in hard purge. A new nullable `deleted_at` column on `print_archives` (`backend/app/models/archive.py`) tracks rows the user removed from the UI. The DELETE endpoint at `backend/app/api/routes/archives.py` now accepts `?purge_stats=true`; the default behaviour is to soft-delete — files removed from disk (still frees the storage), row hidden from listings, but the row stays in the table so the stats endpoint keeps counting it. Setting `?purge_stats=true` falls back to the original hard-delete path for the rare case where the user actually wants the row out of Quick Stats too (e.g. failed prints that shouldn't pollute success-rate dashboards). The migration in `backend/app/core/database.py` adds the column dialect-conditionally — `DATETIME` on SQLite, `TIMESTAMP` on PostgreSQL (PG doesn't accept `DATETIME` on `ALTER TABLE` the way it tolerates it inside `CREATE TABLE`) — plus an index on `deleted_at` so the `WHERE deleted_at IS NULL` filter that's now sprinkled across the listing queries stays cheap on big archive tables. Service-layer changes: `ArchiveService.soft_delete_archive` is a new sibling of `delete_archive` that reuses the existing on-disk path-safety checks (extracted into `_resolve_archive_dir_for_delete` so soft and hard delete share the resolution rules — refuses paths outside `archive_dir`, refuses depth-zero paths) and flips `deleted_at = now()` after `shutil.rmtree`.
Listing methods now filter `PrintArchive.deleted_at.is_(None)`: `ArchiveService.list_archives`, `get_duplicate_hashes_and_names` (a soft-deleted dupe must not inflate a group's count, so the UI shows "1 of 1" instead of "1 of 10"), `find_duplicates` (both the exact-hash and the print-name paths), and `ArchiveComparisonService.find_similar_archives` (both name-match and content-hash paths, so the "Similar archives" panel doesn't suggest something the user just removed). The stats endpoint deliberately keeps NO filter — that's the whole point of #1343. Route-level reads were tightened too: `GET /archives/{id}` returns 404 on soft-deleted rows so stale bookmarks don't expose hidden archives, search (both the SQLite FTS5 path and the LIKE fallback) skips them, the duplicate-group enrichment query in `list_archives` filters them, and tag listing / archives-by-tag exclude them. `GET /archives/slim` and `GET /archives/stats/export` intentionally do NOT filter, so the dashboard widgets in `StatsPage.tsx` keep aggregating across the full history. Frontend: `ConfirmModal` gained an optional `children` slot (`frontend/src/components/ConfirmModal.tsx`) so the delete-confirmation dialog can render an opt-in checkbox between the message and the action buttons without forcing a new bespoke modal. `frontend/src/pages/ArchivesPage.tsx` — both the card view and the detail view — now owns a `deletePurgeStats` boolean per component instance and passes it through to `api.deleteArchive(id, purgeStats)` (`frontend/src/api/client.ts` appends `?purge_stats=true` only when the box is ticked). The checkbox resets to off on every modal close, so the destructive option is opt-in per delete, never sticky. i18n: one new key `archives.modal.deletePurgeStats` added across all 8 locales — full German translation, English fallbacks elsewhere per project convention.
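The soft-delete / purge split described above reduces to a small decision plus one deliberate non-filter. A minimal in-memory sketch (dicts standing in for ORM rows; the function names are illustrative, not the actual `ArchiveService` API):

```python
from datetime import datetime, timezone

def delete_archive(archive: dict, purge_stats: bool, table: list[dict]) -> None:
    """Soft delete by default: on-disk files go (elided here), but the
    row stays so stats keep counting it. purge_stats=True removes the
    row entirely so it also drops out of Quick Stats."""
    if purge_stats:
        table.remove(archive)  # hard delete: gone from stats too
    else:
        archive["deleted_at"] = datetime.now(timezone.utc)  # hidden, still counted

def quick_stats_total_grams(table: list[dict]) -> float:
    # Deliberately NO deleted_at filter — the whole point of #1343.
    return sum(a["filament_grams"] for a in table)

def list_archives(table: list[dict]) -> list[dict]:
    # Listing-side filter: WHERE deleted_at IS NULL
    return [a for a in table if a.get("deleted_at") is None]
```

Soft-deleting a row therefore shrinks the listing but leaves the totals untouched; only the explicit purge path rewinds them.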
Tests added to `backend/tests/integration/test_archives_api.py`: soft delete preserves the row's contribution to total prints / filament / cost (the regression test for the reporter's exact scenario), `?purge_stats=true` drops it from Quick Stats as before, soft-deleted archives 404 on `GET /archives/{id}`, soft-deleted archives are skipped by the search endpoint. All 42 pre-existing archive integration tests stay green, including `test_delete_archive` (which already asserts post-delete 404 — semantically equivalent under soft delete). Frontend `ConfirmModal` (17 tests) and `ArchivesPage` (23 tests) suites green; full build clean. - OIDC provider login icons now render again — the strict SPA CSP no longer breaks them (#1333, PR #1342 by netscout2001) — When an admin configured an OIDC provider with an external
`icon_url` (e.g. `https://google.com/icon.png`), the login page showed the browser's broken-image glyph instead of the IdP logo. Root cause: the SPA ships with the strict policy `img-src 'self' data: blob:`, so the entire admin UI cannot hot-link arbitrary external image hosts; admin-supplied icon URLs hit that wall on every render. Two options were on the table — loosen `img-src` to allow `https:` (a one-line change, but it degrades the SPA's CSP everywhere), or proxy the bytes through the backend (this PR). The proxy path was chosen because (a) the SPA's `img-src` policy stays strict app-wide; (b) the existing MakerWorld thumbnail endpoint at `backend/app/services/makerworld.py` already established the pattern with the same rationale; (c) anonymous login-page renders no longer leak each visitor's IP to the IdP host as a tracking signal — the proxy fetches the bytes once at admin-configure time and serves them from the same origin afterwards. Backend: a new MakerWorld-style fetcher in `backend/app/services/oidc_icon.py` streams the response with `follow_redirects=False` (so the SSRF host allowlist can't be bypassed via a 302 to a private address), enforces a MIME whitelist (PNG/JPEG/WebP/GIF; SVG is intentionally omitted — XML payloads carry too many `xlink:href` / external-ref corner cases for an MVP), and aborts at the first chunk past 1 MB so a hostile or misconfigured IdP serving a 500 MB payload cannot OOM the server. The SSRF guard `assert_safe_public_https_url` in `backend/app/api/routes/_oidc_helpers.py` is stricter than the Spoolman variant — Spoolman deliberately allows loopback / RFC-1918 (same-LAN deployment is the standard topology), while OIDC icons must live on the public internet, so private addresses there are SSRF probes. The shared SSRF data (a cloud-metadata IP set covering AWS/GCP/Azure/Oracle/DO/Alibaba, a numeric-encoded-IP regex, IPv4-mapped-IPv6 unwrap) was extracted to `backend/app/api/routes/_url_safety.py` so the two top-level guards share data but keep their distinct policies.
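The core of the stricter public-only policy can be sketched with the standard-library `ipaddress` module. This is a hedged simplification — the real `assert_safe_public_https_url` additionally handles numeric-encoded IPs, cloud-metadata ranges, and IPv4-mapped IPv6; the function below shows only the "https + no private address" core, and `resolved_ip` being passed in explicitly is an assumption of this sketch:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_public_https_url(url: str, resolved_ip: str) -> bool:
    """Check the address actually dialed (passing the resolved IP in
    avoids a DNS-rebinding gap between validation and fetch)."""
    parts = urlparse(url)
    if parts.scheme != "https" or not parts.hostname:
        return False
    addr = ipaddress.ip_address(resolved_ip)
    # Rejects loopback, RFC-1918 / link-local, multicast, and reserved
    # space — exactly the cases the laxer Spoolman guard allows.
    return not (addr.is_private or addr.is_loopback
                or addr.is_multicast or addr.is_reserved)
```

Note that `169.254.169.254` (the classic cloud-metadata probe) already falls inside link-local space and is rejected by the private-address check alone.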
The Pydantic `_validate_icon_url` in `backend/app/schemas/auth.py` now lazy-imports the runtime SSRF guard so schema validation and the fetcher enforce the same allowlist — no drift between layers. Storage: three new columns on `oidc_providers` (`backend/app/models/oidc_provider.py`): `icon_data` (LargeBinary, `deferred=True` so list queries don't pull the BLOB on every login-page render), `icon_content_type` (String(20), which also serves as the has-icon indicator so the check never accidentally lazy-loads the BLOB), and `icon_etag` (SHA-256 hex). A DB-layer `CheckConstraint` enforces the all-or-nothing triplet (`(icon_data IS NULL) = (icon_content_type IS NULL) = (icon_etag IS NULL)`) — fresh installs (SQLite + PostgreSQL) get it via `metadata.create_all`, stale PostgreSQL installs get it via `ALTER TABLE ADD CONSTRAINT` in `backend/app/core/database.py` (SQLite cannot `ADD CONSTRAINT` on an existing table — the same trade-off as the existing `default_group_id` FK). The migration's `ALTER TABLE` is dialect-conditional — `BLOB` on SQLite, `BYTEA` on PostgreSQL. Routes: four endpoints in `backend/app/api/routes/mfa.py`. `GET /oidc/providers/{id}/icon` is public (no auth, same rationale as `/api/v1/makerworld/thumbnail` — `<img>` tags can't send Authorization headers, and the icon renders before the user is signed in), serves cached bytes with a strong `ETag` and `Cache-Control: public, max-age=3600`, and supports `If-None-Match` including the `W/` weak prefix, the `*` wildcard, and multi-token comma lists per RFC 7232. `DELETE /oidc/providers/{id}/icon` clears all four icon columns (URL + the three cached-bytes columns) — "Remove icon" means the whole record is gone, not just the cache, so the admin form doesn't end up in a confusing half-state where it shows a stale URL while the login page renders the Shield fallback. `POST /oidc/providers/{id}/icon/refresh` re-fetches from the stored URL for the "Refresh" button.
Disabled providers respond 404 on the GET endpoint to avoid leaking their existence to anonymous callers. `POST` / `PUT` integrate the fetcher transactionally: a failed fetch aborts with 400 before commit, so a bad URL on create leaves no half-configured row in the DB, and a bad URL on update leaves the previous cached bytes intact. PUT with an explicit `icon_url: null` clears the icon record (detected via Pydantic's `model_fields_set` — distinct from "field omitted", which preserves it). Both fetch failures and SSRF rejections log at WARNING with the URL redacted (query string and fragment stripped via `_redact_url_for_log`), so admin-supplied presigned URLs carrying `X-Amz-Signature=...` or bearer tokens can't end up in operator log files. Frontend: `frontend/src/pages/LoginPage.tsx` extracts an `OIDCProviderButton` sub-component so each provider owns its own `iconFailed` state — on `<img>` error (provider deleted between page load and image fetch, network blip, etc.) the SPA swaps in the Shield fallback rather than showing the broken-image glyph to anonymous users. `frontend/src/components/OIDCProviderSettings.tsx` does the same with `ProviderIconAvatar` (Globe fallback) and adds Refresh / Remove buttons. The new same-origin proxy URL helper `api.oidcProviderIconUrl(id)` returns a `SameOriginUrl`-branded string so a future caller can't accidentally substitute an attacker-controlled URL where this is consumed. Five new i18n keys (`refreshIcon`, `removeIcon`, `iconRefreshed`, `iconRemoved`, `iconFetchFailed`) added across all 8 locales. Tests:
About 100 new tests cover: the streaming fetcher (MIME whitelist, status codes, redirect rejection, size-cap early exit including the first-oversized-chunk guarantee, a distinct message for missing Content-Type, `httpx.InvalidURL` mapping); the OIDC SSRF guard (explicitly asserting that Spoolman-allowed cases like loopback / RFC-1918 / `localhost` are rejected here, so the two guards do not silently converge); Pydantic-validator parity (numeric-encoded IPs, cloud metadata, multicast, IPv4-mapped IPv6 all rejected at schema-validate time); the dialect-conditional `ALTER TABLE` migration (both BLOB and BYTEA paths via a patched `is_sqlite()`); the full create/update/delete/refresh flow including atomicity (a failed fetch preserves prior state); the upgrade-path edge case (`icon_url` present but no cached bytes → refetch on next save); ETag/304 with the `W/` weak prefix and `*` wildcard; a raw-SQL inconsistent-triplet 404 defence; the PG→SQLite-ZIP backup BLOB type-mapping round-trip; and a CSP regression-guard test in `backend/tests/integration/test_security_headers.py` that asserts the SPA default CSP block does not include `https:` in `img-src` — so a future contributor "fixing" a broken icon by relaxing CSP discovers the proxy pattern instead. Frontend tests in `LoginPage.test.tsx` and `OIDCProviderSettings.test.tsx` cover `has_icon: true|false`, mixed providers on the same page, `<img>` error → Shield/Globe fallback, and per-provider state isolation (two `has_icon: true` providers; firing `error` on A leaves B's icon intact — locking in the sub-component extraction so a future hoist of `useState` to the parent loop is caught by CI). Manually verified end-to-end against a live PocketID instance with multiple icon URLs. Follow-on tightening: `has_icon` is now a required field on `OIDCProviderResponse` (no Pydantic default — it fails loudly if any future caller skips `_build_provider_response`), backed by an `OIDCProvider.has_icon` property reading `icon_content_type`.
In `update_oidc_provider` the icon refetch was moved BEFORE the setattr loop, so on fetch failure the in-memory ORM object stays consistent (the DB row was already safe via `get_db()`'s rollback; this closes the in-memory window too). Patched by netscout2001. - Backup tab indicator dot now turns green when Scheduled (local) Backups is enabled (#1331, PR #1338 by chanakyan-arivumani) — Toggling Scheduled Backups on inside Settings → Backup left the sidebar tab indicator dot stuck on grey: the visual cue that there's an active backup configuration was lost for users who run scheduled local backups without GitHub. Two stacked layers caused it: (1) the dot condition at
`SettingsPage.tsx:1461` only checked the GitHub chain (`cloudAuthStatus?.is_authenticated && githubBackupStatus?.configured && githubBackupStatus?.enabled`); `settings?.local_backup_enabled` was never consulted, so the scheduled-backup state had no path to the indicator. (2) The toggle handler in `GitHubBackupSettings.tsx` called `api.updateSettings({ local_backup_enabled })` but never invalidated the `['settings']` query cache, so `SettingsPage` kept reading the stale value — the indicator would only update on a full page reload even if the condition fix were in place. Two-line fix: extend the dot's predicate to `... || settings?.local_backup_enabled` and add `queryClient.invalidateQueries({ queryKey: ['settings'] })` after a successful save (matching the existing invalidation pattern at `GitHubBackupSettings.tsx:402/463/477/497`). The GitHub chain short-circuits first, so the common case is unchanged. Patched by chanakyan-arivumani. - Color catalog presets now apply
`extra_colors` (gradient stops) and `effect_type` (sparkle / wood / marble / glow / matte) onto the spool, not just hex + name (#1340, reported by maugsburger) — Creating a catalog entry that pairs a base color with multi-color gradient stops and a visual effect, then clicking that swatch in the Edit Spool dialog, only copied `color_name` and `rgba` over — the `extra_colors` and `effect_type` fields were silently dropped. The data was flowing from the backend correctly (`GET /api/v1/inventory/color-catalog` returns both fields per the `ColorCatalogEntry` schema in `frontend/src/api/client.ts`), but three layers above stripped them: (1) `SpoolFormModal.tsx` typed its `colorCatalog` state with a narrower shape that omitted the two fields; (2) `ColorSection.tsx` mapped catalog entries to `CatalogDisplayColor` (the typed-down shape rendered on swatches) without propagating them; (3) the `selectColor()` handler only set `rgba` + `color_name` on click. Fix: widened both types in `spool-form/types.ts` (`CatalogDisplayColor` + `ColorSectionProps.catalogColors`) to carry the optional `extra_colors` and `effect_type`, propagated them through the four `matchedCatalogColors` mapping callbacks (byBrand / exact full-material / normalized-trailing-`+` / base-material prefix), and extended `selectColor` to take optional `extraColors` / `effectType` parameters. Semantic rule: catalog swatches are complete presets — picking one writes BOTH gradient and effect from the entry (overwriting any existing values), so a gradient catalog entry applies its stops AND a solid catalog entry clears any old gradient that was on the spool. Recent-colors and the hardcoded fallback palette are plain hex pickers — picking one keeps any existing `extra_colors` / `effect_type` untouched, since those swatches aren't presets, just color picks.
Bonus: fixed the en-US spelling drift the reporter flagged in their nitpick — the `'Extra colours'` and `'wrong colour loaded'` strings (which had been seeded into all 8 locale files as English fallbacks) were standardized to `'Extra colors'` and `'wrong color loaded'`; matching comment blocks (`// Multi-colour ...`) were normalized in the same pass. Regression tests in `__tests__/components/spool-form/ColorSectionCatalogExtras.test.tsx` (3 cases): a catalog click with gradient + effect propagates all four fields to `updateField`, a catalog click on a solid preset clears any pre-existing extras/effect (the preset-replaces-look semantic), and a fallback palette click leaves extras/effect untouched. All 23 spool-form tests + 8 i18n parity tests pass; build clean. - Assigning a spool to an unconfigured AMS slot no longer silently skips MQTT on A1 Mini / P1S firmware — and the "PETG over a PLA-configured slot won't reconfigure" symptom is fixed in the same change (#1322, reported by RosdasHH) — On the user's A1 Mini BMCU (firmware 01.07.02.00) and P1S Standard AMS (firmware 00.00.06.75), pressing "Assign Spool" on any slot left the slot unconfigured: the DB row was created with
`pending_config=True`, the MQTT publish was skipped, and the log line `Pre-configured assignment: ... (slot empty, will configure on insert)` fired even though the spool was physically loaded. The same code path also blocked the "swap PLA to PETG in the same slot" flow — Bambuddy would keep treating the spool as PLA because the publish never reached the printer. Root cause: the empty-slot detection at `backend/app/api/routes/inventory.py:1267` preferred `tray.state == 11` ("filament fed to extruder") over `tray_type`, falling back to `tray_type` only when `state` was missing entirely. The reporter's AMS dumps showed `state == 3` on every slot — configured and unconfigured, on both printers — and `state` was never absent. So the state-only branch always fired, the result was always "empty", and MQTT was always skipped regardless of whether the slot was actually loaded. The "fingerprint_type empty → defer until insert" pre-config replay at `backend/app/main.py:1026` had the same `cur_state == 11` gate, so even when the user manually configured the slot in Bambu Studio afterward (making `tray_type` go from `""` to `"PLA"`), the deferred MQTT publish never fired because state stayed at 3. Fix: both call sites now use a disjunction — the slot is treated as loaded when either `state == 11` or `tray_type` is non-empty. The "Reset slot" case (state=11 + tray_type="") that the original state-only check was protecting still works through the first clause; the configured-slot case (state=3 + tray_type="PLA") on firmwares that never set state=11 now works through the second; and truly empty unconfigured slots (state≠11 + tray_type="") still fall through to the pending-config path correctly. The on_ams_change replay's disjunction also fires the deferred publish when the user later configures the slot through Bambu Studio, since that flips `tray_type` non-empty even if state stays at 3.
Caveat: for a truly empty slot with a 3rd-party non-RFID spool that the user physically inserted, neither signal points to "loaded" on these firmwares, so we still can't auto-fire the publish until the slot gets configured (manually or by another assign). The pending-config row persists in the DB and gets applied on the next AMS push that flips `tray_type` non-empty. Regression tests: 3 in `test_inventory_assign.py` — `test_state_never_eleven_firmware_with_loaded_tray_fires_mqtt` (state=3 + tray_type='PLA' → MQTT fires; pins the reporter's primary symptom and the PETG-over-PLA secondary symptom, which goes through the same predicate), `test_state_never_eleven_firmware_with_empty_tray_marks_pending` (state=3 + tray_type='' still pending — confirms the disjunction didn't accidentally turn truly empty slots into the loaded branch), and `test_on_ams_change_fires_replay_when_tray_type_appears_without_state_11` (a pre-existing SpoolBuddy-style assignment with an empty fingerprint; tray_type going `''` → `'PLA'` on a state=3 firmware fires the deferred publish even though state never becomes 11). All 28 tests in the file pass; ruff clean. - Assign Spool / Inventory search: numeric spool ID lookup is back, and Unassign in Spoolman mode no longer stays permanently disabled (#1336, reported by S0liter) — Two independent regressions surfaced from the same report. (1) Numeric ID search: typing a Spoolman spool's numeric ID into the search box on the "Assign Spool" dialog (or on the Inventory page) returned no results. The shared search helper
`spoolMatchesQuery` at `frontend/src/utils/inventorySearch.ts:7` only checked the text fields (`material`, `brand`, `color_name`, `subtype`, `note`, `slicer_filament_name`, `storage_location`) — the spool's `id` was not part of the predicate, so a query like `12` only matched when "12" happened to be a substring of one of the text fields. One-line fix: the predicate now also tests `String(spool.id).includes(q)`, mirroring the case-insensitive substring semantics of the other fields. This covers both call sites: the Assign Spool dialog (`AssignSpoolModal.tsx:255` for local inventory + `:446` for Spoolman) and the main Inventory page (`InventoryPage.tsx:871`). A new regression test in `__tests__/utils/inventorySearch.test.ts` pins exact match (`'42'` → id 42), substring (`'4'` → id 42), and non-match (`'99'` → id 42 rejected) so the predicate can't silently drift back to "text only". (2) Unassign button stuck disabled in Spoolman mode: opening the edit modal on a Spoolman spool that was assigned to an AMS slot left the Unassign button greyed out — the user had no way to release the spool back to "available". The modal at `SpoolFormModal.tsx:526` only ever queried `api.getAssignments()` (the legacy local `spool_assignments` table) and looked up by `a.spool_id === spool.id`. In Spoolman mode the slot assignment lives in the separate `spoolman_slot_assignments` table, keyed by `spoolman_spool_id` — so the lookup always returned `undefined`, the button's `disabled={isPending || !spoolAssignment}` predicate stayed true forever, and `unassignMutation` was also pointing at the wrong API (`unassignSpool` instead of `unassignSpoolmanSlot`). Both the query and the mutation now branch on the existing `spoolmanMode` prop: Spoolman mode uses `getSpoolmanSlotAssignments()` + lookup by `spoolman_spool_id` + `unassignSpoolmanSlot(spool.id)` and invalidates the `spoolman-slot-assignments-all` / `spoolman-slot-assignments` query keys; local mode keeps the existing path unchanged.
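The widened predicate from (1) lives in TypeScript (`inventorySearch.ts`); a Python sketch of the same semantics, with the ID participating under the same substring rules as the text fields (field list taken from the text, function shape hypothetical):

```python
def spool_matches_query(spool: dict, query: str) -> bool:
    q = query.strip().lower()
    text_fields = ("material", "brand", "color_name", "subtype", "note",
                   "slicer_filament_name", "storage_location")
    # Case-insensitive substring match over the text fields (pre-existing).
    if any(q in str(spool.get(f) or "").lower() for f in text_fields):
        return True
    # The one-line fix: the numeric ID joins the predicate with the
    # same substring semantics — String(spool.id).includes(q) in the TS.
    return q in str(spool["id"])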
Two new regression tests in `__tests__/components/SpoolFormModal.test.tsx` (SpoolFormModal — Unassign button (#1336)): the button is enabled and clicking it calls `unassignSpoolmanSlot(42)` when a matching `spoolman_slot_assignment` exists, and the button stays disabled (no `unassignSpool` fallback) when no assignment exists. All 12 search-helper tests + 13 InventoryPage search tests + 27 SpoolFormModal tests pass; frontend build clean. - Spoolman auto-create no longer labels Bambu Lab RFID spools with competitor names like "3DXTECH™ Black" (#1309, PR #1330 by ojimpo) — When Bambuddy auto-created a Spoolman filament entry for a Bambu Lab RFID spool, the second-stage lookup against Spoolman's external library (
`GET /api/v1/external/filament`, served from SpoolmanDB) matched on `material + color_hex` only — there was no `manufacturer` / `vendor` filter. The catalog is multi-vendor and roughly ID-sorted: for PLA + `#000000` (black) it contains 64 entries, with the first hit being `3djake_pla_black_1000_175_n` (3DJAKE), the third being `3dxtech_pla_carbonxcarbonfiberblack_500_175_p` (3DXTECH, name `CarbonX™ Carbon Fiber Black`), and the actual `bambulab_pla_black_1000_175_n` not surfacing until position 15. Bambuddy therefore created the filament under the Bambu Lab vendor but labeled it with a competitor's product name. Real-world observations in production: Bambu Lab ABS Black was created as `3DXTECH™ Black`, Bambu Lab PLA Support picked the adjacent / wrong variant instead of `bambulab_pla_supportforpla/petgblack_500_175_n`, and PLA Basic Black was created as `PLA` (the material, not `PLA Basic`). A secondary issue compounded this: `_create_filament_from_external` dropped the external entry's `density` field, so even when the correct entry was eventually picked, the density got overwritten by `create_filament`'s built-in PLA-default 1.24 fallback instead of the catalog's actual value (1.26 for PLA Basic, 1.31 for PETG, etc.). Fix in `backend/app/services/spoolman.py::_find_or_create_filament`: (1) the external-library loop now filters by `manufacturer == "Bambu Lab"` (case-insensitive, whitespace-trimmed), with a defensive `id.startswith("bambulab_")` fallback that handles entries where the `manufacturer` field is missing or has drifted in a future SpoolmanDB schema. (2) When multiple Bambu Lab candidates match the same `material + color_hex`, the function prefers the entry whose `name` equals the AMS `tray_sub_brands` (lowercase+strip comparison) so the more specific variant wins — `PLA Basic` over generic `Black`, `Support for PLA/PETG Black` over generic `Black`, etc.
(3)_create_filament_from_externalnow propagatesexternal.get("density")through tocreate_filament; when the catalog entry has no density set, the existing material-table fallback insidecreate_filamentstill kicks in via theif density is Nonebranch at line 321 — no path lost. Behavioural caveat the user needs to know: previously-created mis-named filaments are NOT auto-renamed by this fix. Step 1 of_find_or_create_filamentis the internal-Spoolman-filament loop that short-circuits on(vendor == "Bambu Lab", material, color_hex)— and that path is unchanged. Any Bambu Lab filament created by an older Bambuddy build (or hand-edited by the user) will continue to be matched and reused on subsequent AMS reads, regardless of how wrong its name is. To pick up the corrected name, the user has to delete the mis-named filament in Spoolman once — then the next AMS read for the same material+color falls through to the external-library step and creates a new entry with the correct Bambu Lab name. This is deliberate: some users may have intentionally renamed Bambu Lab filaments (e.g. to follow their own naming convention or to merge variants) and a silent auto-rename would undo that. Regression tests intest_spoolman_service.py::TestFindOrCreateFilament(6 new): internal short-circuit preserves the existing match without touching the external library, non-Bambu-Lab external entries are skipped even when they sort first in SpoolmanDB,PLA Basicwins over genericBlackvia thetray_sub_brandstiebreaker (per maintainer request on #1309), no-match-anywhere falls back totray_sub_brands or tray_typeinstead of leaking a competitor name into the create call,id.startswith("bambulab_")accepts entries with absentmanufacturerfield, and density propagates end-to-end through the public method instead of getting clobbered by the material-default. All 44 tests intest_spoolman_service.pypass; ruff clean. Reported and patched by ojimpo. 
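The two-stage selection described above (vendor filter with an id-prefix fallback, then the `tray_sub_brands` tiebreaker) can be sketched as follows — a minimal illustration, assuming catalog entries arrive as dicts with `id`/`manufacturer`/`name` keys; the real logic lives inside `_find_or_create_filament` and operates on already-filtered `material + color_hex` matches:

```python
def pick_bambu_lab_entry(candidates, tray_sub_brands):
    """Sketch: pick the best Bambu Lab entry from SpoolmanDB candidates.

    `candidates` is assumed to be the material+color_hex matches, in
    catalog order; shapes here are illustrative, not the real schema.
    """
    def is_bambu(entry):
        mfr = (entry.get("manufacturer") or "").strip().lower()
        # defensive fallback for entries whose manufacturer field is
        # missing or has drifted in a future SpoolmanDB schema
        return mfr == "bambu lab" or entry.get("id", "").startswith("bambulab_")

    bambu = [e for e in candidates if is_bambu(e)]
    if not bambu:
        return None  # caller falls back to tray_sub_brands or tray_type

    want = (tray_sub_brands or "").strip().lower()
    # prefer the more specific variant whose name matches the AMS sub-brand
    for entry in bambu:
        if entry.get("name", "").strip().lower() == want:
            return entry
    return bambu[0]
```

With the reporter's black-PLA catalog ordering, the 3DJAKE and 3DXTECH entries are skipped and the `bambulab_` entry wins even though it sorts 15th.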
- Safety: bed-jog Z direction was inverted on A1 / A1 Mini — "Up" rammed the nozzle into the bed (#1334, reported by william.filipcic@gmail.com) — On A1 / A1 Mini, clicking the "Up" arrow on the printer-card bed-jog control would send the nozzle straight into the build plate. Reporter triggered it with the 50 mm step and crashed their nozzle. Root cause: the bed-jog UI was designed against the X1 / P1 / H2 family's bed-on-Z convention. On those printers the bed is the Z-axis, Bambu's firmware homes Z=0 at the top, and `G1 Z-` raises the bed toward the toolhead (decreases the nozzle-bed gap). The frontend maps "Up" to negative distance with that convention in mind. A1 / A1 Mini are bed-slingers: the bed moves on Y, the toolhead moves on X+Z, and the firmware uses standard cartesian Z (Z+ = toolhead up). On those models `G1 Z-10` drives the toolhead down 10 mm — straight through any clearance the user had — which is exactly what the reporter saw. There was no model classification at the bed-jog code path; every printer got the same X1-convention G-code. Fix: new `is_bed_slinger(model)` helper at `backend/app/services/printer_manager.py` (sibling to the existing `supports_chamber_temp` / `has_stg_cur_idle_bug`, reuses the already-defined `A1_MODELS` frozenset which covers display names and internal codes `N1`/`N2S`). The bed-jog route at `backend/app/api/routes/printers.py:2710` now inverts the signed distance before emitting the G-code when the printer model is in that set, so the UI "Up" semantics ("decrease nozzle-bed gap") stay consistent regardless of which physical part moves on the printer. Frontend stays untouched — the single source of truth for the direction logic lives in the backend, keyed off the printer's `model` column, so any future bed-slinger Bambu model only needs one frozenset update. The route's `Query` description and docstring now state the new contract explicitly: distance is the gap adjustment, not the raw Z value, and the backend translates per model. Regression tests: 13 in `test_bed_jog.py::TestBedJogAPI` — 6 parametrised cases prove bed-on-Z models (X1C / P1S / H2D / H2S / H2C / P2S) still emit `G1 Z-10.00` for a UI "Up" click (pass-through), 6 parametrised cases prove A1 / A1 Mini / A1MINI / A1-MINI / N1 / N2S emit `G1 Z10.00` instead (inverted, toolhead up), plus 1 symmetric "Down arrow drops the toolhead via `G1 Z-`" case. 5 in `test_printer_manager.py::TestIsBedSlinger` pin the helper's classification contract — A1 family true, every bed-on-Z model false, None / empty-string safe, case-insensitive.
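The backend-side translation can be sketched like this — a minimal illustration, assuming the frontend keeps sending a signed distance with "Up" = negative (the X1 convention); the `A1_MODELS` contents below are assumed from the changelog's description, not copied from the code:

```python
# Assumed membership for illustration; the real frozenset lives in
# backend/app/services/printer_manager.py.
A1_MODELS = frozenset({"a1", "a1 mini", "a1mini", "a1-mini", "n1", "n2s"})

def is_bed_slinger(model):
    """True for A1-family printers where the toolhead moves on Z."""
    return bool(model) and model.strip().lower() in A1_MODELS

def bed_jog_gcode(model, distance_mm):
    """UI contract: the signed distance is the nozzle-bed GAP adjustment
    ("Up" = negative, per the X1 convention). Bed-on-Z models pass it
    through; bed-slingers invert it so "Up" lifts the toolhead instead
    of ramming it into the plate."""
    if is_bed_slinger(model):
        distance_mm = -distance_mm
    return f"G1 Z{distance_mm:.2f}"
```

A UI "Up" click (`distance_mm=-10`) then emits `G1 Z-10.00` on an X1C but `G1 Z10.00` on an A1 Mini, matching the parametrised regression tests.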
Safety note: if you own an A1 or A1 Mini and were running any 0.2.x build before this release, do not use the printer-card bed-jog buttons — they will move the toolhead in the wrong direction. The Z controls in Bambu Studio / Bambu Handy are unaffected (they generate their own model-aware G-code).
- Spoolman inventory: editing a spool's color name no longer "reverts" to the subtype on save (#1319, reported by MartinNYHC) — On Spoolman-backed inventory, changing a spool's color name in the edit dialog appeared to accept the new value, but the inventory list column and the next edit-dialog open showed it reverted to the subtype string. Three layers stacked on top of each other to produce this: (1)
`find_or_create_filament` at `backend/app/services/spoolman.py:609` matches existing Spoolman filaments by `material / name / color_hex / vendor` — `color_name` is intentionally not part of the match key (Spoolman doesn't standardise the field and most installs leave it null) — but when a match was found it returned the existing filament's id unchanged, silently dropping the new `color_name` value. The write never reached Spoolman. (2) On re-read, the helper at `_spoolman_helpers.py:279` falls back to `subtype` when `filament.color_name` is empty (without the fallback, Spoolman installs that don't fill the field would render every spool as "Unknown color"). The persisted value was still empty, so the read synthesised the column from `subtype`. (3) The edit form prefilled `color_name` from `spool.color_name` — which on Spoolman installs without `color_name` was the synth value (= subtype). If the user changed `subtype` but not `color_name`, the form silently round-tripped the OLD subtype back to Spoolman as if it were a real user-set `color_name`, which then started showing up as the persisted value on the next render — the exact "color reverts to subtype" pattern in the bug report. Fixes: (1) `find_or_create_filament` now patches the matched filament's `color_name` via the existing `patch_filament` PATCH wrapper when the request differs from what's stored. Convention on the parameter: `None` = "don't touch", `""` = explicit clear (patches Spoolman to `null`), any other string = set/update. (2) The PATCH route at `spoolman_inventory.py:567` now uses Pydantic's `model_fields_set` to distinguish "field omitted" from "field explicitly set to null" — only the latter is a clear (mirrors the existing `storage_location` pattern at the same site). (3) The map helper now also returns `color_name_is_synthesized: bool` on every inventory record, and `SpoolFormModal.tsx` checks it on prefill so the input starts blank when the value was synthesised from subtype — the user sees the real stored state and can't accidentally round-trip the synth value back.
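The omitted-vs-null-vs-value translation described in fixes (1) and (2) can be sketched as follows — a plain dict stands in for the Pydantic model here; the real route makes the same distinction with `model_fields_set`:

```python
_UNSET = object()  # sentinel: "key absent from the PATCH body"

def color_name_action(body: dict):
    """Translate the wire shape into the service-side convention above:
    returns None = "don't touch", "" = explicit clear (null in Spoolman),
    any other string = set/update."""
    value = body.get("color_name", _UNSET)
    if value is _UNSET:
        return None       # field omitted -> keep the current value
    if value is None:
        return ""         # explicit null on the wire -> clear
    return value          # set/update
```

This is why the two integration tests distinguish `color_name=null` in the body (route translates to `""`) from `color_name` omitted entirely (route passes `None`).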
The read-side fallback is kept on purpose (the list-display "Unknown color" problem hasn't gone away — it's just that the form no longer treats the fallback as a real value). A `patch_filament` failure is caught and logged but doesn't block the match — the spool still links to the correct filament, only the colour-name update is dropped, which is the safer failure mode. Regression tests: 5 in `test_spoolman_inventory_methods.py::TestFindOrCreateFilament` — patch-on-change, no-patch-when-unchanged, no-patch-when-None, clear-when-`""`-passed, and patch-failure-still-returns-match-id. 2 in `test_spoolman_inventory_helpers.py::TestMapSpoolmanSpool` — the `color_name_is_synthesized` flag is `False` when a real value is stored, `True` when the fallback fires. 2 integration tests in `test_spoolman_inventory_api.py` — wire-level `color_name=null` clears (route translates to `""`), and `color_name` omitted from the PATCH body keeps the current value (route passes `None`). All 564 spoolman-tagged tests pass; ruff clean; frontend build clean.
- Deleting an SSO user left orphan OIDC/MFA/camera-token rows on SQLite — blocked re-login and leaked auth state (#1285, PR #1295 by netscout2001) — On SQLite (default deployment) the `delete_user` route left orphan rows in `user_oidc_links`, `user_totp`, `user_otp_codes`, and `long_lived_tokens` because the project intentionally runs with `PRAGMA foreign_keys=OFF`, so the `ON DELETE CASCADE` declared on those tables never fired. Reported symptom: an admin deleted an OIDC-provisioned user, the user tried to re-login via SSO, the OIDC callback found the orphan `UserOIDCLink` pointing at the (now missing) user, failed to resolve it, and redirected to `account_inactive` instead of triggering `auto_create_users`. The same root cause was leaking MFA secrets (`user_totp`), pending email OTP codes (`user_otp_codes`), and per-user camera-stream tokens (`long_lived_tokens` — `verify()` would happily match by `lookup_prefix` even after the owning user was gone). PostgreSQL deployments were unaffected — cascade was firing there. Fix: mirrors the existing `APIKey` cleanup pattern in `delete_user` (introduced in PR #1182). `backend/app/api/routes/users.py:delete_user` now explicitly deletes `UserOIDCLink`, `UserTOTP`, `UserOTPCode`, and `LongLivedToken` rows owned by the user; also folds in `PrintBatch.created_by_id` cleanup (same `ondelete=SET NULL` SQLite-FK-off root cause — the `SET NULL` block at `users.py:393-407` was missing it). `backend/app/core/database.py:run_migrations` gains an idempotent startup orphan-cleanup that sweeps the four auth tables (`DELETE FROM <table> WHERE user_id NOT IN (SELECT id FROM users)`), wrapped in `begin_nested()`, logged at INFO only when rows actually drop — so installations carrying orphans from before the fix are healed automatically without manual DB intervention. No-op on Postgres (cascade already fired) and idempotent on SQLite (second run finds nothing). `backend/app/api/routes/mfa.py:list_oidc_links` returns `"<deleted>"` for `provider_name` when `link.provider` is null instead of raising `AttributeError` — covers the symmetric edge case where a `UserOIDCLink` could reference an orphaned provider.
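The startup sweep can be sketched with raw `sqlite3` — a simplified stand-in for the real migration, which runs through SQLAlchemy inside `begin_nested()` and logs at INFO only when rows actually drop:

```python
import sqlite3

# The four auth tables named in the fix.
ORPHAN_TABLES = ("user_oidc_links", "user_totp", "user_otp_codes", "long_lived_tokens")

def sweep_orphans(conn: sqlite3.Connection) -> dict:
    """Idempotent orphan sweep: with PRAGMA foreign_keys=OFF, the declared
    ON DELETE CASCADE never fires on SQLite, so rows whose owner is gone
    must be deleted explicitly. Returns per-table delete counts."""
    deleted = {}
    for table in ORPHAN_TABLES:
        cur = conn.execute(
            f"DELETE FROM {table} WHERE user_id NOT IN (SELECT id FROM users)"
        )
        deleted[table] = cur.rowcount
    conn.commit()
    return deleted
```

A second run finds nothing to delete, which is the idempotency property the migration tests pin.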
Tests: 14 new/extended. `test_users_auth_cleanup.py` (new): 5 tests verify `delete_user` removes OIDC/TOTP/OTP/long-lived-token rows individually + combined-cleanup atomically. `test_oidc_relogin.py` (new): full end-to-end test reproducing the #1285 symptom — mocked IdP, first OIDC login, admin delete, second OIDC login proves `auto_create_users` fires again (and pinned the regression boundary by confirming the test fails without the fix). `test_orphan_auth_cleanup_migration.py` (new): 7 tests for per-table cleanup across all four auth tables, idempotency, no-op on fresh install, and survival of rows belonging to real users. `test_mfa_api.py` adds `TestListOidcLinksDefensiveProviderNull` for the null-check. `test_auth_api.py::test_delete_user` extended to assert all five auth-table side effects (`UserOIDCLink`, `UserTOTP`, `UserOTPCode`, `APIKey`, `LongLivedToken`). All 13 PR-added tests + 194 tests in extended files pass; ruff clean. Reported and patched by netscout2001.
- Slicer bundle import 400/502/503 errors now land in the log so support bundles tell us why (#1312, reported by hasmar04) — Reporter hit `400 Bad Request` from `POST /api/v1/slicer/bundles` when uploading a Bambu Studio Printer Preset Bundle (`.bbscfg`); a second contributor had reported the same shape the day before. The same bundle file uploaded fine on Martin's dev machine, which strongly points at sidecar-side differences (image version, write permissions on `DATA_PATH/bundles`, TrueNAS Docker volume perms, etc.) — but triage was blocked because the sidecar's actual reject reason only made it as far as the FE toast. Bambuddy logged just the uvicorn-access line (`POST /api/v1/slicer/bundles HTTP/1.1 400`), with no detail in the support bundle. The route at `backend/app/api/routes/slicer_presets.py:import_slicer_bundle` now emits a `logger.warning` for each of the three failure shapes: 400 (`SlicerInputError`) — the sidecar's reject string is logged alongside the filename and byte count, so we can see "bundle rejected because `manifest.json` is missing" in the next support bundle without asking the reporter to copy the toast text. 503 (`SlicerApiUnavailableError`) — logs the configured sidecar URL plus the exception detail (separates "URL wrong" from "sidecar offline"). 502 (`SlicerApiError`) — logs filename + byte count + error string, useful when the sidecar's `DATA_PATH/bundles` write fails (the typical 5xx cause on this path). The 400 case is `WARNING` rather than `INFO` deliberately — it's an unexpected end-user-visible failure, not a routine event. The existing `test_import_bundle_sidecar_400_passes_through` now also asserts the reject reason AND the filename appear in caplog, so the support-bundle-includes-the-diagnostic contract is pinned. Doesn't fix #1312's actual root cause (sidecar-side, still under investigation with the reporter) — but the next reporter we get on this code path will produce a bundle that contains the answer.
- Restarting Bambuddy mid-print triggered plate-check pause + duplicate archive (#1304, reported by kleinwareio) — When a P1S print was in progress and the user updated the Bambuddy container (`latest` → `daily` in the report, but the same path fires on any restart), Bambuddy paused the live print with an "Object detected on build plate" warning AND re-archived the in-progress file as a duplicate. Root cause: the print-start detector at `backend/app/services/bambu_mqtt.py:2780` gated on `self._previous_gcode_state != "RUNNING"`, which is true whether we just saw IDLE→RUNNING (a real print start) OR we just constructed a fresh BambuMQTTClient and `_previous_gcode_state` is still its initial `None` (catch-up push from a printer already running). The fresh-client case fired `on_print_start`, which downstream ran the plate-detection-and-pause flow at `main.py` AND the FTP-download-and-archive flow — exactly the two symptoms in the bug report. Fix: added `self._previous_gcode_state is not None` to the `is_new_print` guard, so the first push from the printer in a new process lifetime never counts as a state transition into RUNNING. `_was_running` still flips to `True` via the unconditional "Track RUNNING state" block at `bambu_mqtt.py:2795`, so print-completion detection keeps working — only the start callback is suppressed. Three existing tests that asserted on the old (buggy) behavior were updated to seed `_previous_gcode_state = "IDLE"` first, matching the realistic lifecycle of a print actually starting (Bambuddy observes IDLE/FINISH before RUNNING); they now exercise the correct path. New regression test `test_first_running_push_after_bambuddy_restart_does_not_fire_print_start` pins the contract for the reporter's exact scenario — and asserts that `_was_running` still becomes True so completion still fires when the print ends. The `is_file_change` branch was unaffected (it already required `_previous_gcode_file is not None`, so restart-catch-up never reached it anyway).
- Create User form rejected weak passwords with an opaque "HTTP 422" toast (#1303, reported by TrickShotMLG02) — Three independent UX gaps stacked on top of each other.
(1) Discoverability: the Create User and Edit User modals showed no hint about the backend's password complexity requirements (min 8 chars + uppercase + lowercase + digit + special character; enforced in `backend/app/schemas/auth.py:_validate_password_complexity`). Reporter typed an 8-character all-digits password and had no way to know why it failed. (2) Validation mismatch: the frontend's pre-submit check at `SettingsPage.tsx` was only `password.length < 6`, accepting passwords the backend would reject — every weak password got bounced after the round-trip instead of getting blocked locally. (3) Error display fragility: when the backend returned a 422 with a Pydantic detail array, the API client's error parser at `frontend/src/api/client.ts:107` could fall through to the bare `HTTP ${status}` fallback if the mapped/filtered detail array ended up empty after stripping the `"Value error, "` prefix — masking the real reason as just "HTTP 422". Fixes: (1) added a `passwordRequirements` helper line under both password inputs in Create User / Edit User; (2) extracted `checkPasswordComplexity` into `frontend/src/utils/password.ts`, called from `handleCreateUser` and `handleUpdateUser` before the API request — it returns the same FIRST failing rule the backend's validator would have flagged (uppercase before lowercase before digit before special, matching `_validate_password_complexity`'s order — fixing one rule shouldn't immediately trip a different message), and the submit button is disabled until all rules pass; (3) the API client now falls back to `JSON.stringify(detail)` when the mapped array is empty, so a malformed but non-empty 422 detail surfaces SOMETHING informative instead of a bare status code. New translation keys `settings.passwordRequirements` and `settings.toast.passwordNeeds{Uppercase, Lowercase, Digit, Special}`, plus the existing `passwordTooShort` text updated from "6 characters" to "8 characters". English + German fully translated (the German reporter's locale); FR/IT/PT-BR translated using straightforward equivalents; JA/ZH-CN/ZH-TW seeded with English for the new complexity messages (the existing project flow for new strings).
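The ordered first-failing-rule contract can be sketched in a few lines — shown here in Python mirroring the backend validator's rule order; the rule messages are illustrative, and the shipped frontend helper is the TypeScript `checkPasswordComplexity`:

```python
import re

# Order mirrors the backend validator described above: length, then
# uppercase, lowercase, digit, special — so frontend and backend always
# flag the same first failing rule. Messages are illustrative.
_RULES = [
    (lambda p: len(p) >= 8, "Password must be at least 8 characters"),
    (lambda p: re.search(r"[A-Z]", p), "Password must contain at least one uppercase letter"),
    (lambda p: re.search(r"[a-z]", p), "Password must contain at least one lowercase letter"),
    (lambda p: re.search(r"\d", p), "Password must contain at least one digit"),
    (lambda p: re.search(r"[^A-Za-z0-9]", p), "Password must contain at least one special character"),
]

def check_password_complexity(password):
    """Return the first failing rule's message, or None when all pass."""
    for ok, message in _RULES:
        if not ok(password):
            return message
    return None
```

The reporter's `"12345678"` passes the length rule and stops at the uppercase rule, so the user sees one actionable message instead of "HTTP 422".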
7 new unit tests in `frontend/src/__tests__/utils/password.test.ts` pin the validator's contract, including the reporter's exact `"12345678"` input, which now produces a local "Password must contain at least one uppercase letter" toast instead of a 422 round-trip.
- External NAS scan hung forever and never committed subdirectories (#1299, reported by joeferrante) — Linking an external mount with ~1200 subdirectories caused the "Link External Folder" modal to spin until the FE gave up, after which the mount appeared in the sidebar but with no subdirectories, and subsequent scans had no effect either. The reporter's support bundle pinpointed two compounding problems. (1) `TypeError: unsupported operand type(s) for /: 'str' and 'str'` on every STL — 1,606 instances in the log. `generate_stl_thumbnail` at `stl_thumbnail.py:119` does `thumbnails_dir / thumb_filename`, which requires a `Path`, but the external-scan call site at `library.py:1256` passed both arguments as `str` (`generate_stl_thumbnail(str(filepath), str(thumb_dir))`). Every STL crashed inside the `try/except` and got logged at WARNING level — visible spam but, more importantly, wasted work (`trimesh.load()` and matplotlib setup ran before the failing division). Fix: a defensive `Path()` coerce at the top of `generate_stl_thumbnail` so the function works regardless of how callers pass args. Regression test `test_string_arguments_accepted_without_typeerror` pins the contract. (2) The scan ran STL thumbnail generation synchronously inside the HTTP request — even after fix (1), `trimesh.load()` + matplotlib render is 1–5 seconds per STL; on a NAS with thousands of STLs that's hours of work blocking the modal. The frontend would time out, the user would refresh, the HTTP request would be cancelled, `db.commit()` at `library.py:1331` would never run, and no folder/file rows would be committed — which is exactly why "subsequent scans have no effect" (each retry started from scratch and hit the same wall). Fix: the scan now defers STL thumbnails to a background task. After `db.commit()`, the route spawns `asyncio.create_task(_backfill_external_stl_thumbnails(folder_ids))` with the full set of folder IDs from `folder_cache.values()` (covers both pre-existing subfolders AND the ones created during this scan — `all_folder_ids` is snapshotted before the walk and would have missed the new ones), then returns immediately. The background task opens its own `async_session`, walks every STL file with `thumbnail_path IS NULL` in the linked folder tree, generates each thumbnail, and commits per-file so a server restart mid-run only loses the in-flight thumbnail. It survives an FE refresh because the task lives in the FastAPI event loop, not the request scope.
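Both fixes can be sketched together — a minimal illustration, with the DB walk and the trimesh/matplotlib renderer replaced by stand-in parameters (`pending`, `render`):

```python
import asyncio
from pathlib import Path

def generate_stl_thumbnail(filepath, thumbnails_dir):
    """Fix (1): defensive Path() coercion so `dir / filename` works
    whether callers pass str or Path (the failing call site passed str)."""
    filepath = Path(filepath)
    thumbnails_dir = Path(thumbnails_dir)
    thumb_path = thumbnails_dir / (filepath.stem + ".png")
    # ... the real code renders via trimesh + matplotlib here ...
    return thumb_path

async def backfill_stl_thumbnails(pending, render):
    """Fix (2), sketched: the deferred backfill iterates files that still
    lack a thumbnail, renders each, and 'commits' per file — so a restart
    mid-run only loses the in-flight thumbnail. In the real task `pending`
    is a DB walk over thumbnail_path IS NULL rows in its own session."""
    done = []
    for stl in pending:
        await asyncio.sleep(0)       # yield to the event loop between files
        done.append(render(stl))     # per-file commit in the real task
    return done
```

Because the task is spawned with `asyncio.create_task(...)` after the commit, it lives in the server's event loop rather than the request scope — which is what makes it survive the frontend refresh.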
The reporter's smaller mount (`/mnt/NAS_3d_files/3mf_Files`, 4 subdirectories) used to work because it completed inside the FE timeout window — with this fix, the 1200-subdir parent mount completes equally fast and thumbnails fill in over the following minutes. Auto-scan after create is unchanged: `FileManagerPage.tsx:1147-1151` still calls `scanExternalFolder` immediately after `createExternalFolder`, which is correct UX — what changed is that the scan response now arrives in seconds instead of timing out.
- MakerWorld "Open Cloud settings" link landed on the wrong page (#1300) — On the MakerWorld page, the "Open Cloud settings" hyperlink shown in the sign-in-required banner (when no Bambu Cloud token is stored) pointed at `/settings?tab=cloud`. The Settings page has no `cloud` tab (its tabs are general/plugs/notifications/queue/filament/network/apikeys/virtual-printer/spoolbuddy/failure-detection/users/backup), so the URL-param check at `SettingsPage.tsx:179` (`validTabs.includes(tabParam) ? tabParam : 'general'`) silently fell back to the General tab. The Bambu Cloud login UI actually lives on the Profiles page (`/profiles`), which already defaults its sub-tab to `cloud` — the same destination the existing `backup.cloudLoginRequired` i18n string ("Sign in under Profiles → Cloud Profiles…") documents. One-line fix in `MakerworldPage.tsx:438`: `to="/settings?tab=cloud"` → `to="/profiles"`. The Profiles page's `useState<ProfileTab>('cloud')` (line 2822) means no query param is needed — landing on `/profiles` opens the Cloud sub-tab directly.
- External-spool prints no longer credit usage to AMS slot 0's Spoolman spool (#1276, reported and diagnosed by ojimpo — regression of #853) — On a single-filament external-spool print (TPU loaded in `vir_slot` id=254 on the reporter's H2S + AMS 2 Pro), `_resolve_global_tray_id` in `spoolman_tracking.py` was crediting the usage to whatever Spoolman spool happened to be linked to AMS slot 0 — a completely unrelated material in the reporter's case. ~48.94 g of TPU was credited to a PLA spool across 4 prints before they noticed. Root cause: BambuStudio encodes virtual tray IDs (254/255) as `-1` in the flat `ams_mapping` array it sends to the printer (a convention already documented in `bambu_mqtt.py:start_print()`), but the Spoolman tracking helper was treating `-1` as "unmapped → use position-based default", and the default mapped `slot_id=1` → `global_tray_id=0`. When `slot_to_tray[slot_id-1] == -1` and `ams_trays` contains an external slot (254 or 255), the helper now returns the external tray ID directly, matching the convention `start_print()` uses on the other side of the pipeline. It prefers 254 over 255 (consistent with single-nozzle `tray_now` reporting and the `vir_slot` id=255→254 remap in `bambu_mqtt.py:864`). Legacy behavior is preserved when `ams_trays` is empty or contains no external slot (callers that don't pass `ams_trays` keep the position-based fallback). Two regression tests cover the reporter's exact scenario (`ams_trays={0,1,2,3,254}`, `slot_to_tray=[-1]` → 254) plus the H2D-deputy case and the fall-through-when-no-external case. Root cause investigation and patch by ojimpo.
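The `-1` resolution and the global-tray addressing convention (also used by the usage-tracker fix below) can be sketched as follows — a minimal illustration; function shapes and the key format for external/AMS-HT trays are assumptions for the sketch, not the real signatures:

```python
def resolve_virtual_tray(slot_to_tray, slot_id, ams_trays):
    """Sketch of the -1 handling: BambuStudio encodes virtual trays
    (254/255) as -1 in ams_mapping. When the mapped value is -1 and an
    external slot is present, return it directly (254 preferred over 255,
    matching tray_now reporting); otherwise keep the legacy
    position-based fallback (slot 1 -> global tray 0)."""
    mapped = slot_to_tray[slot_id - 1]
    if mapped != -1:
        return mapped
    for external in (254, 255):
        if external in ams_trays:
            return external
    return slot_id - 1  # legacy position-based default

def global_tray_to_key(gid):
    """Standard addressing convention from the changelog: 254/255 =
    external spool, >=128 = AMS-HT (one ams_id per unit, single tray —
    key shape assumed), otherwise a 4-slot AMS (id // 4, id % 4)."""
    if gid in (254, 255):
        return ("external", 0)
    if gid >= 128:
        return (gid, 0)
    return (gid // 4, gid % 4)
```

For the reporter's scenario (`ams_trays={0,1,2,3,254}`, `slot_to_tray=[-1]`), the resolver now returns 254 instead of crediting global tray 0's spool.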
`print_queue` mode arrived in the queue with `bed_levelling`, `flow_cali`, `vibration_cali`, `layer_inspect`, and `timelapse` set to the SQLAlchemy column-level defaults, never the user's workflow preferences. The reporter happened to have every workflow default set to the opposite of the column defaults, so prints appeared to have all five options inverted; every queue item required hand-editing before dispatch. The manual `POST /print-queue/` endpoint reads these fields off the request body (the frontend pulls them from settings before submitting), but the VP-FTP-receive path at `backend/app/services/virtual_printer/manager.py:_add_to_print_queue` constructed `PrintQueueItem` without touching them at all — SQLAlchemy then filled in `bed_levelling=True, flow_cali=False, vibration_cali=True, layer_inspect=False, timelapse=False` regardless of what was in the DB. The fix reads `default_bed_levelling` / `default_flow_cali` / `default_vibration_cali` / `default_layer_inspect` / `default_timelapse` via the existing `get_setting()` helper (the same pattern already used in the function for `virtual_printer_archive_name_source`) and passes them explicitly to `PrintQueueItem`. A small `_bool_setting()` helper maps `None` → AppSettings schema default, so a fresh install with no workflow-page customization behaves identically to before. Regression tests: `test_add_to_print_queue_uses_workflow_defaults_from_settings` (verifies all five settings flow through with values opposite to the column defaults, matching the reporter's exact scenario) and `test_add_to_print_queue_falls_back_to_schema_defaults_when_unset` (verifies the no-DB-row path).
- Linking a Spoolman spool to an AMS-HT slot no longer fails with a CHECK constraint error (#1274, reported by guillaume.houba) — On H2C / H2D, AMS-HT units report `ams_id` 128+ (one ams_id per unit, single tray). The `spoolman_slot_assignments` table's `ck_ams_id_range` constraint only allowed 0-7 (standard AMS) or 255 (external), so the upsert on `POST /spoolman/inventory/slot-assignments` blew up with `IntegrityError: CHECK constraint failed: ck_ams_id_range` and the user had no way to link any spool to an AMS-HT slot. Widened the constraint formula to `(ams_id >= 0 AND ams_id <= 7) OR (ams_id >= 128 AND ams_id <= 191) OR ams_id = 255` — matches the value range the internal `spool_assignment` table already accepts and leaves room for up to 64 AMS-HT units (the existing `bambu_mqtt` / usage-tracker code uses the same 128-based addressing). Updated in the ORM model (`models/spoolman_slot_assignment.py`) and both the SQLite/Postgres `CREATE TABLE` DDL in `core/database.py`. New idempotent migration `_migrate_widen_spoolman_slot_ams_id_range`: the Postgres path runs `DROP CONSTRAINT IF EXISTS` + `ADD CONSTRAINT` (no data risk — the new formula is strictly wider than the old); the SQLite path detects the stale formula in `sqlite_master`, table-rebuilds via the standard `_v2` rename pattern used elsewhere in this file (`_migrate_update_auto_link_constraint` at `database.py:418`), and leaves pre-constraint legacy tables untouched. Tests: `test_ams_id_check_admits_ams_ht_range` (ORM + DDL formula) and `test_assign_accepts_ams_ht_id` (end-to-end `POST /slot-assignments` with `ams_id=128`).
- X2D live camera stream no longer cut by Obico polling / snapshot capture (#1271, reported by clabeuhtegrite) — The MJPEG fan-out broadcaster from #1089 lets multiple browser viewers share one upstream RTSP socket per printer, but internal callers (Obico AI polling at the user's configured
`obico_poll_interval`, and the manual `/camera/snapshot` endpoint) still opened their own fresh RTSP connections. X1C / H2D / P2S firmware tolerates brief concurrent camera sockets, so the gap was invisible there. X2D firmware `01.01.00.00` (and likely future firmwares) enforces strict single-camera-connection more aggressively: every Obico poll (default every 5 s) kicked the live stream, the broadcaster paid the multi-second RTSP handshake to reconnect, and the user saw the stream cut "all the time." New helper `try_get_active_buffered_frame(printer_id)` at `api/routes/camera.py:74` returns the broadcaster's last buffered frame (always <1 s old while any viewer is connected) and `None` when no viewer is active. Obico's `_capture_frame` and the `/camera/snapshot` endpoint check it first and only fall through to a fresh socket when no stream is running — preserving today's behavior when nobody is watching. `plate_detection` and `layer_timelapse` are deliberately not converted: plate-detection needs guaranteed-fresh frames post-print (false-positive risk if the user already grabbed the print in the same second), and layer-timelapse is for external cameras only. Regression tests: `test_camera_snapshot_reuses_buffered_frame_when_stream_active` and two `TestCaptureFrameSharesBroadcasterUpstream` Obico tests.
- Usage tracker: spool swaps in UNUSED slots mid-print no longer charge the old spool (#1269, reported by maugsburger) — Path 2 of the usage tracker (AMS remain% delta fallback) iterated every AMS tray that had a remain% delta, even slots the print never touched. When a user swapped spools in an unrelated slot during a print, the new spool reports `remain=0` (no RFID tag yet) while the snapshot from print-start was 100%, so the fallback charged the originally-assigned spool the full 1000 g. Reporter's case: single-filament print on AMS0-T3 (`ams_mapping=[3]`), swapped a spool in T1 and another in T2 to refill while the print continued — wound up with `Spool 27 consumed 1000.0g (100%) on printer 1 AMS0-T1` and `Spool 24 consumed 170.0g (17%) on printer 1 AMS0-T2`, neither of which was ever in the print. Fix: the fallback now builds `print_used_keys` from `session.ams_mapping`, `state.tray_change_log`, and `session.tray_now_at_start` (the three runtime signals telling us which trays were actually part of the print), converts each global tray ID to `(ams_id, tray_id)` using the standard convention (254/255 → external, ≥128 → AMS-HT, otherwise `id // 4, id % 4`), and skips the fallback for trays whose key is not in that set. When all three signals are empty (legacy edge case: no slicer push, no MQTT tray-change events, no `tray_now` at start) the legacy "scan every tray" behavior is preserved so we don't regress prints with no metadata. Regression test `test_usage_tracker.py::test_skips_fallback_for_trays_outside_print_mapping` reproduces the reporter's exact scenario.
- Printer card: smart-plug live wattage now rounded to whole watts (#1266, reported by Carter3DP) — The printer card's smart-plug status badge rendered `plugStatus.energy.power` raw, so plugs that report fractional watts (Kauf PLF12 via ESPHome / Home Assistant in the reporter's case, but any MQTT plug pushing a float can hit this) showed values like `14.123456789012` W and overflowed the card width. `SmartPlugCard` and `SwitchbarPopover` already wrapped the same field in `Math.round()`; only the printer-card badge was missing the round. Single-line fix at `frontend/src/pages/PrintersPage.tsx:4569`.
Added
- Build-plate icon on archive cards + uniform printer/model line (#1253, reported by tonygauderman) — Archive cards now show an OrcaSlicer-style bed icon in the printer/model row indicating which build plate the print was sliced for (Cool / Cool SuperTack / Engineering / High Temp / Textured PEI / Smooth PEI), with the full plate name in the hover tooltip. Closes the gap where users had to remember which plate matched a re-print or open the source 3MF in a slicer just to read the bed setting. The card row is also unified: archives with a real Bambuddy-printer association used to render as `H2D-1 GCODE …` while slicer-only uploads rendered as `Sliced for X1C GCODE …` — same line, two different shapes. Dropped the `Sliced for` prefix so both render as a uniform `<name-or-model> [bed-icon] GCODE <hash>` row, scanning the same regardless of provenance. Backend: new `bed_type` column on `print_archives` (idempotent `ALTER TABLE` migration; SQLite + Postgres safe), populated from `curr_bed_type` in `Metadata/slice_info.config` (per-plate metadata, the authoritative source — that's the bed type that actually got sent to the printer for the exported plate) with a fallback to `Metadata/project_settings.config`'s top-level `curr_bed_type` for older 3MF shapes. Wired through both code paths that produce archive responses: `archive_to_response()` (the hand-rolled dict converter at `archives.py:97` — easy to miss, a schema-only change is silently dropped by Pydantic since the route bypasses `from_attributes`) and the `/rescan` endpoint, so old archives can be re-parsed by the user via the existing per-archive Rescan button. Newly-ingested archives get the value automatically. Backfill script: `scripts/backfill_archive_bed_type.py` (with `--dry-run`) re-opens every NULL archive's 3MF on disk and populates the column — opt-in for users who want their entire history covered without waiting for natural turnover. It auto-loads `.env` from the project root before importing backend modules (since `core/config.py:52` reads `DATABASE_URL` from `os.environ` at import time, not from `pydantic-settings` at `Settings()` time), prints the resolved DB URL with credentials redacted on every run so operators can confirm they're hitting the intended database (Postgres / SQLite — Bambuddy supports both per #1219's `DATABASE_URL` pathway), and calls `init_db()` itself before querying, so the migration applies even if the script is run against a database the backend hasn't touched yet.
Frontend: 6 OrcaSlicer-style PNGs ship in `frontend/public/img/bed/` (under `/img/` because that path was already statically mounted at `main.py:5244`; the `/bed-icons/` top-level path attempted first hit the SPA catch-all and returned `index.html` as `text/html`, which the browser then rendered as nothing). New `utils/bedType.ts` maps slicer strings (case-insensitively) to an icon + human-readable label; it covers Bambu Studio's and OrcaSlicer's diverging spellings for the same physical plate (e.g. `Cool Plate` ↔ `PC Plate`, `Cool Plate (SuperTack)` ↔ `Supertack Plate` ↔ `Bambu Cool Plate SuperTack`). Renders on both the card-grid view and the list view in `ArchivesPage.tsx`. An unmapped or NULL `bed_type` simply omits the icon, so cards stay clean for archives created before this change. Note on icon mapping: `bed_pei.png` → Textured PEI, `bed_pei_cool.png` → Smooth PEI is a best guess from the OrcaSlicer asset names — swap the two paths in `bedType.ts` if a future user reports the icons reversed for their plate.
- Spool labels: new 40×30 mm template, hex colour code, bolder brand line (#809 follow-up, requested by oliboehm) — Three small enhancements to the spool-label printer, rolled into one change. (1) New `box_40x30` template — a 40×30 mm single label, a common DK/Brother roll size. Added to `_SINGLE_LABEL_SIZES_MM` in `backend/app/services/label_renderer.py` and to the request body's `Literal[...]` enum in `backend/app/api/routes/labels.py`; its height is ≥ 20 mm, so it routes through the existing roomy layout (swatch + QR + full text column). (2) Colour hex code on every label — a new `_hex_code_label()` helper formats `data.rgba` as `#RRGGBB` (alpha-stripped, uppercased to match the inventory UI's colour-picker convention) and returns `""` for missing/malformed input so the caller skips drawing instead of throwing. It renders as a small line under the material/subtype line in the roomy layout, and as a third line above the spool ID in the tight (AMS) layout — useful when several near-identical material/colour spools sit next to each other in the AMS or on a shelf. (3) Brand line bigger + bold — the brand on every label now renders in `Helvetica-Bold` instead of regular `Helvetica`, with the size bumped 5.5 pt → 6.5 pt on the tight layout and 7 pt → 8 pt on the roomy layout, so it's the most legible non-ID field at arm's length. Wiring: the `SpoolLabelTemplate` union in `frontend/src/api/client.ts` is extended with `'box_40x30'`; `LabelTemplatePickerModal` gets a new `TEMPLATE_OPTIONS` entry for it; `inventory.labels.templates.box40x30.{label,hint}` keys were added across all 8 locales (en + de fully translated; fr/it/ja/pt-BR/zh-CN/zh-TW translated to native, with the existing per-key fallback in the modal as a safety net). The 5-template grid still wraps to 2 columns on small viewports per #1230's fix; the modal regression test was widened from 4 to 5 template buttons.
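The rgba → `#RRGGBB` contract described in (2) can be illustrated with a standalone sketch. `hex_code_label` here is a hypothetical stand-in for the real `_hex_code_label()` helper — the actual parsing in `label_renderer.py` may differ; what this mirrors is the stated contract: strip alpha, uppercase, and return `""` instead of raising on bad input.

```python
def hex_code_label(rgba: object) -> str:
    """Format an inventory rgba value as '#RRGGBB' for a spool label.

    Strips the alpha channel and uppercases to match the colour-picker
    convention; returns "" for missing/malformed input so the caller can
    simply skip drawing the line instead of handling an exception.
    """
    if not isinstance(rgba, str):
        return ""
    value = rgba.strip().lstrip("#")
    # Accept RRGGBB or RRGGBBAA; anything else is treated as malformed.
    if len(value) not in (6, 8):
        return ""
    rgb = value[:6]
    try:
        int(rgb, 16)  # validate the hex digits
    except ValueError:
        return ""
    return "#" + rgb.upper()
```

Returning an empty string (rather than `None` or an exception) keeps the drawing code a single truthiness check.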
Tests: the `ALL_TEMPLATES` parametrize tuple in `test_label_renderer.py` was extended with `box_40x30` so all 7 generic invariants (PDF header, empty input, multi-colour, missing fields, malformed rgba, long strings, sheet pagination) cover the new template; new `test_hex_color_code_rendered_when_rgba_set` (asserts `#F5E6D3` appears in the uncompressed PDF for both 40×30 and 62×29), `test_hex_color_code_skipped_when_rgba_invalid` (regex pin: no `#RRGGBB` shape on the label when the rgba is malformed, except the spool ID's `#42`), and `test_brand_rendered_in_bold_per_809_followup` (asserts a `Helvetica-Bold` font reference is in the PDF — catches a regression if the brand line ever reverts to regular weight). All 33 backend tests + 15 frontend modal tests pass; ruff clean.
- Copy spool — duplicate any spool's settings into a fresh inventory row in two clicks (#1234, PR #1246 by MiguelAngelLV) — Adds a copy button (`Copy` icon) next to the existing edit button on every spool in the inventory page, across all three views (table row, card, grouped-table inner row). Clicking it opens the existing `SpoolFormModal` pre-filled with every field from the source spool — material, brand, color, slicer preset, label/core/cost, K-profiles, all of it — except `weight_used`, which is reset to 0 (since the new spool starts full), and the RFID identity fields (`tag_uid`, `tray_uuid`, `tag_type`, `data_origin`), which aren't part of the form payload anyway, so the new spool is its own physical roll. Save calls `api.createSpool` (or `api.createSpoolmanInventorySpool` in Spoolman mode — both inherit the dispatch routing for free). Closes the long-running gap where users with many near-identical spools (e.g. five 1 kg PETG-CF rolls bought in a single order) had to re-enter every field from scratch for each one. Implementation shape: `SpoolFormModalProps.mode: 'create' | 'edit' | 'copy'` (exported as `SpoolFormMode`) replaces the previous `isEditing = !!spool` heuristic — every existing call site in `InventoryPage.tsx` was updated to pass the explicit mode, and the modal's title / submit-button label / weight-reset gate / submit-route branching all key on `mode` directly. The `onCopy` callback is optional on `SpoolCard`, `SpoolTableRow`, and `SpoolTableGroup` (matching the existing `onPrintLabel?` pattern), so the button is conditionally rendered and other consumers of those subcomponents don't get a copy affordance forced on them. Card-view and table-row buttons stop click propagation so clicking copy doesn't also fire the parent row's edit handler. Quick Add interaction: the Quick Add toggle is gated on `mode === 'create'` (was `!isEditing`), so it stays out of copy mode — otherwise a user could enable Quick Add, bump the quantity to N under the singular "Copy Spool" title, and silently bulk-create N copies via `bulkCreateMutation`. i18n: new `inventory.copySpool` key across all 8 locales (en + de translated; fr/it/ja/pt-BR/zh-CN/zh-TW seeded with the English fallback per project flow).
Tests: 3 new in `SpoolFormModal.test.tsx` (a `SpoolFormModal copy mode` describe block — title shows "Copy Spool", save calls `createSpool` not `updateSpool`, `weight_used` is reset to 0 in the create payload when copying a spool with non-zero usage), 2 new in `InventoryPageCopyButton.test.tsx` (table-row copy button click → "Copy Spool" heading; cards-view copy button click → same heading after switching view modes) — guards against the three call sites drifting apart. Existing `SpoolFormBulk.test.tsx` and `SpoolFormModal.test.tsx` renders that omitted the `mode` prop were updated with an explicit `mode="create"` so the tightened Quick Add gate doesn't hide the toggle from them. Both `InventoryPageCopyButton.test.tsx` and `InventoryPageDeepLink.test.tsx` gained MSW handlers for the modal's open-time fetches (`/api/v1/cloud/status`, `/api/v1/cloud/local-presets`, `/api/v1/cloud/builtin-filaments`, `/api/v1/inventory/color-catalog`, `/api/v1/inventory/spool-catalog`, `/api/v1/printers/`) — without them MSW passes through to the real network, gets ECONNREFUSED, and the rejected fetch resolves after the test environment is torn down, surfacing as a flaky "window is not defined" unhandled rejection in the modal's `setLoadingCloudPresets(false)` finally block (a pre-existing flake hit ~1 in 3 full-suite runs at PR head).
### Fixed
- `.bbscfg` Printer Preset Bundle import was broken for every user since launch — the sidecar compose file pointed at the wrong branch (#1312, reported by hasmar04, confirmed by netscout2001) — `slicer-api/docker-compose.yml`'s `build.context` pointed at `https://github.com/maziggy/orca-slicer-api.git#bambuddy/profile-resolver`, but the `POST /profiles/bundle` endpoint plus the `uploadBundle` multer middleware were only ever committed to a sibling branch, `bambuddy/bundle-import` (commit `a3172c5`, 2026-05-06). Every user who ran the documented `docker compose up -d` got a sidecar without the bundle endpoint — their `POST /profiles/bundle` fell through to the generic `POST /profiles/:category` handler, which rejected with either "Name cannot be empty" (no `name` form field sent) or "Invalid file type. Only JSON files are allowed." (the JSON multer filter rejecting the `.bbscfg`). Fix: `bambuddy/bundle-import` was fast-forward-merged into `bambuddy/profile-resolver` in the orca-slicer-api repo and pushed, so the compose file's existing branch ref now points at the right commit. No Bambuddy code change. Existing users rebuild with `cd slicer-api/ && docker compose --profile bambu build --no-cache --pull && docker compose --profile bambu up -d` — `--pull` is the key flag because BuildKit caches the git fetch context separately from layer caches, so `--no-cache` alone silently reuses the old branch checkout. New users on 0.2.5+ are unaffected. Lesson on diagnosis flow: the wrong root cause was reported twice during triage before the actual branch mismatch was caught — first as "built a week ago, before the bundle endpoint existed" (a correct claim, but for the wrong branch), then as "rebuild with `--pull`" (which still hit the same bug because the compose file pointed at the branch that never got the work). The reporter's third round of logs — the multer "Only JSON files are allowed" error string from `upload.js:17`, which only matches `uploadJson`, not `uploadBundle` — was the smoking gun: no amount of rebuilding would help, because the wired-up branch genuinely lacked the endpoint.
### Changed
- Support bundle records slicer-API CLI versions; wiki sidecar-update docs hardened (#1312 follow-up) — Triage scaffolding added during the investigation of the bundle-import bug above, and useful independently of that fix: the next time a user reports a sidecar-related failure, the support bundle will identify which slicer CLI version is actually running without needing a manual `curl /health`. Backend: a new `_fetch_slicer_health(url)` helper in `backend/app/api/routes/support.py` does a 2-second GET on `<sidecar>/health`, parses the JSON, and walks every non-`dataPath` key under `checks` looking for a `version` field — needed because the wrapper labels both bambu-studio-api and orca-slicer-api as `checks.orcaslicer` regardless of which CLI is actually bundled (a cosmetic wrapper bug, not Bambuddy's). `_collect_slicer_api_info` now calls it instead of the bare reachability ping and adds two new fields per side to the integrations block: `bambu_studio_version` and `orcaslicer_version`. It captures `"unknown"` verbatim when the wrapper's `--help` regex didn't match (which is itself diagnostic). Behavior is preserved on the error paths: an empty URL returns `None`, a connection failure returns `{reachable: False, version: None}`, and a malformed/non-200 response returns `{reachable: True, version: None}`, so the reviewer can separate network failure from misconfiguration. A trailing slash in the configured URL is stripped before appending `/health`. Tests: 9 new in `TestFetchSlicerHealth`; existing `TestCollectSlicerApiInfo` tests were updated to patch `_fetch_slicer_health` and assert the new `_version` fields. All 62 helper tests pass; ruff clean. Docs: `bambuddy-wiki/docs/features/slicer-api.md` got four additions. (1) Quick Start gains a warning callout that the Compose file builds from a branch tip and a plain `docker compose up -d` will keep using the originally-built image. (2) The Updating section now recommends `docker compose --profile bambu build --no-cache --pull` (both flags) and explains why both matter. (3) New troubleshooting entry for the "Name cannot be empty" / "Only JSON files are allowed" `.bbscfg` import error. (4) New troubleshooting entry for the orphan-container conflict (`container name "/bambu-studio-api" is already in use`) that hits users whose existing containers were built from an older compose file with un-prefixed image tags. The pre-existing `/health` `version: "unknown"` entry also got a note clarifying that the wrapper mislabels the `checks` field as `orcaslicer` for both sidecars — both are cosmetic issues, not stale-image indicators.
### Fixed
- LDAP settings: the "Advanced" collapsible section header was always rendering in English regardless of UI language (#1297, reported by Fuechslein) — `LDAPSettings.tsx:352` calls `t('settings.ldap.advanced') || 'Advanced'`, but the translation key was never defined in any locale file. The `|| 'Advanced'` fallback kicked in and the header rendered in English in every language. Added `settings.ldap.advanced` to all 8 locales: Advanced (en), Erweitert (de), Avancé (fr), Avanzate (it), 詳細設定 (ja), Avançado (pt-BR), 高级 (zh-CN), 進階 (zh-TW). No component change was needed — the fallback now never triggers because the key resolves properly. The i18n parity check holds at 4754 leaves across all locales.
Changelog truncated — see the full CHANGELOG.md for the complete list.