> [!NOTE]
> This is a daily beta build (2026-05-17). It contains the latest fixes and improvements but may have undiscovered issues.
**Docker users:** Update by pulling the new image:

```
docker pull ghcr.io/maziggy/bambuddy:daily
```

or

```
docker pull maziggy/bambuddy:daily
```
**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.
### Fixed
- Adding a printer with a wrong access code (or unreachable IP) no longer creates an empty card — Several support reports traced back to a single root cause: the user mistyped their access code in the Add Printer dialog, `POST /printers` happily persisted the row, the subsequent `printer_manager.connect_printer()` call was fire-and-forget so the failure was invisible, and the dashboard ended up showing a printer card that could never display state. The create route now runs `printer_manager.test_connection()` (the same MQTT probe the standalone Test Connection button has always used) BEFORE inserting the row, and refuses with HTTP 400 if the probe fails (sketched below). The `Printer` row is never written on failure. Structured error response: the backend returns `{detail: {code: "printer_connection_failed", message: "..."}}` rather than a plain English string — the new `ApiError.code` field on the frontend lets the toast layer pick a localized `printers.toast.connectionFailedNotAdded` key instead of surfacing the English fallback. Existing tests kept green via an autouse `_mock_printer_test_connection` fixture in `test_printers_api.py` that defaults the probe to success; a new `test_create_printer_rejects_when_mqtt_probe_fails` asserts the failure path returns 400, surfaces the stable code, AND verifies the row was not persisted (the critical part — earlier versions of the regression would have passed even if we'd left the row behind). 8 new i18n translations for `printers.toast.connectionFailedNotAdded` across all 8 locales; parity holds at 4831 leaves. 28 printer-route tests green.
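A minimal sketch of the probe-before-insert ordering, assuming a FastAPI-style route. The handler, model, and in-memory store here are illustrative stand-ins, not Bambuddy's actual code; only `test_connection()` and the structured 400 detail come from the entry above:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
printers: list[dict] = []  # stand-in for the DB table


class PrinterCreate(BaseModel):
    name: str
    ip: str
    access_code: str


async def test_connection(ip: str, access_code: str) -> bool:
    """Stand-in for printer_manager.test_connection(), i.e. the MQTT probe."""
    return True  # the real probe opens an MQTT session and waits for a report


@app.post("/printers")
async def create_printer(payload: PrinterCreate):
    # Probe BEFORE persisting: a failed probe must never leave a row behind.
    if not await test_connection(payload.ip, payload.access_code):
        raise HTTPException(
            status_code=400,
            detail={
                "code": "printer_connection_failed",  # stable, machine-readable
                "message": "Printer unreachable or access code rejected.",
            },
        )
    printers.append(payload.model_dump())  # row written only on success
    return payload
```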
### Changed
- GitHub backup: save-failure messages render inline on the card instead of as a toast — The new "repository is not private" rejection message is ~250 chars listing every credential the backup carries, which clips badly in a toast. Both the initial-setup save and the debounced autosave now stash the backend's error message into a new `saveError` state and render it as a red inline banner above the test-result block, with `whitespace-pre-wrap` so the full message stays readable. The banner clears on a successful save, on the next save attempt, and as soon as the user starts editing the URL / token / provider (the three fields whose changes invalidate the privacy check) — so it doesn't linger after the user has already addressed the cause. Short success toasts ("Settings saved", "Token updated", "Backup enabled") are unchanged. Manual dismiss button included for users who want to clear it without retrying.
### Security
- GitHub backup refuses to save against a non-private repository — While auditing real-world Bambuddy backup repos on GitHub I found several that were left public by their owners. That's a serious data leak: the settings backup only filtered `bambu_cloud_token` and `auth_secret_key`, so `mqtt_username`, `mqtt_password`, `ha_token`, `prometheus_token`, `bambu_cloud_email`, `external_url`, and the printer access codes (via K-profiles, which carry the serial number) were going to whatever visibility the user picked when they created the repo. The fix is a hard guard at every save, re-checked on every push: `POST /github-backup/config` and `PATCH /github-backup/config` (when the URL, token, or provider changes) run a connection test internally and return HTTP 400 unless `is_private` comes back `True`. The same check fires inside `run_backup()` before every scheduled or manual push, so a repository that was private at config time but later flipped to public in the provider's UI gets a clear "Backup aborted: the target repository is no longer private" failure entry instead of leaking the next backup. Implementation: each provider's `test_connection` (`GitHubBackend`; `ForgejoBackend` override; `GitLabBackend` override; `GiteaBackend` inherits unchanged) now returns `is_private: bool | None` — `True` for confirmed private, `False` for public (or GitLab's `internal`), `None` for "couldn't determine" (older self-hosted APIs, non-2xx responses). The route helper `_enforce_private_repo` rejects anything that isn't `True`, with separate error messages for the public case ("Make the repository private...") vs the unknown-visibility case ("...could not confirm...") — see the sketch below. The frontend test-connection UI now renders the visibility result inline — green check + "Repository is private — safe to back up to" when confirmed, red banner with the full list of credentials at risk + "Saving is blocked until..." when public, yellow banner + "could not determine" when null. Three new i18n keys (`repoIsPrivate`, `repoIsPublicWarning`, `repoVisibilityUnknown`) translated across all 8 locales; parity holds at 4830 leaves. Wiki `docs/features/backup.md` gains a top-level `!!! danger "Private repositories only"` block listing what's at stake and what to do if the user already has a public backup repo, plus every per-provider setup step is updated from "(can be private)" to "(must be private)". Tests: 5 new in `test_github_backup_api.py::TestGitHubBackupPrivateRepoGuard` — create rejects public (400 + "not private" in detail), create rejects unknown visibility (400 + "could not confirm"), create rejects failed test_connection (400 + propagates the underlying message), a PATCH that changes the URL re-runs the check and rejects on public, and a PATCH that touches an unrelated field (e.g. `schedule_enabled`) does NOT call `test_connection` (proven via a mock that raises if called — without the field-change gate, every benign toggle would trigger a live API call). The existing 15 tests now use an autouse fixture that mocks `test_connection` to return private-success so they don't try to reach github.com. 4905 backend tests green.
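A hedged sketch of the tri-state guard: `_enforce_private_repo` and the `is_private` field are named in the entry; the exact error strings and exception shape here are illustrative.

```python
from fastapi import HTTPException


def _enforce_private_repo(is_private: bool | None) -> None:
    """Reject saves/pushes unless the provider confirmed the repo is private."""
    if is_private is True:
        return  # confirmed private: safe to back up to
    if is_private is False:
        # Definitely public (or GitLab "internal"): block, with remediation.
        raise HTTPException(
            status_code=400,
            detail="Repository is not private. Make the repository private before enabling backups.",
        )
    # None: older self-hosted APIs or non-2xx responses. We couldn't confirm
    # visibility, so we refuse rather than risk pushing credentials publicly.
    raise HTTPException(
        status_code=400,
        detail="Could not confirm the repository is private; refusing to save.",
    )
```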
### Fixed
- Inventory: "Print labels…" now works in Spoolman mode — Both endpoints already exist (`POST /inventory/labels` for the built-in table, `POST /spoolman/labels` for Spoolman), and the `LabelTemplatePickerModal` correctly branches on a `spoolmanMode` prop. But the modal was instantiated in `InventoryPage.tsx` with `spoolmanMode={false}` hard-coded, with a stale comment from the original PR claiming "Spoolman path hands users an iframe straight to Spoolman so the per-spool button never shows in that context". That assumption stopped being true when the unified inventory UI shipped — the per-spool button DOES show in Spoolman mode now, but every click resolved to `/inventory/labels` with Spoolman spool IDs and returned `404 Spool(s) not found`. Fix passes the actual `spoolmanMode` value through to the modal (one-line change, plus removing the stale comment block). The existing `LabelTemplatePickerModal.test.tsx` already covers both branches at the component level — the gap was that no test exercised the InventoryPage wiring. This is another instance of the parity rule from [#1390 follow-up]: inventory features must ship the same UX in both modes; per the new feedback memory, any future inventory change gets a mental checklist of both routes + both client methods + both UI gates before being considered shipped.
### Added
- Inventory: "Reset usage to 0" also works in Spoolman mode (#1390 follow-up) — The first cut of this action only wired the built-in inventory path, so Spoolman users saw the eraser icon disappear when they switched modes. Now the same two endpoints exist on the Spoolman inventory router: `POST /spoolman/inventory/spools/{spool_id}/reset-usage` PATCHes Spoolman's `/spool/{id}` with `used_weight: 0` for a single spool, and `POST /spoolman/inventory/spools/reset-usage-bulk` does the same per ID across an explicit list and returns `{reset: N}` (individual Spoolman failures are logged and counted out, the batch keeps going). A `reset_spool_usage(spool_id)` helper on `SpoolmanClient` is the actual HTTP call. The mutations in `InventoryPage.tsx` already had the right shape — they now switch on `spoolmanMode` to pick `api.resetSpoolmanInventorySpoolUsage` / `api.bulkResetSpoolmanInventorySpoolUsage` vs the internal-inventory client methods, and the three `spoolmanMode ? undefined : ...` gates that hid the eraser buttons in Spoolman mode are gone. Three new tests in `test_spoolman_inventory_api.py` lock the Spoolman path (per-spool, bulk, and the typo-wipe guard on empty list). The wiki page now says "Spoolman users get the same actions" instead of the original "Spoolman-mode users don't see either button" note. 4900 backend tests green.
- Inventory: "Reset usage to 0" per spool and across all active spools (#1390 follow-up, requested by IndividualGhost1905) — Each spool's `weight_used` counter accumulates over its lifetime and feeds the "Total Consumed (Since tracking started)" stat on the Inventory page. There was no way to clear it without nuking the spool or manually editing the field — and manually setting `weight_used=0` via `PATCH /spools/{id}` auto-locks the spool (`weight_locked=true` is auto-set whenever `weight_used` is sent explicitly, so AMS auto-sync stops touching the spool), which is the wrong behaviour for "clean-slate my Total Consumed stat so future prints track from zero". Two dedicated endpoints in `backend/app/api/routes/inventory.py` zero the counter without touching the lock flag: `POST /inventory/spools/{spool_id}/reset-usage` (single spool) returns the updated `SpoolResponse`; `POST /inventory/spools/reset-usage-bulk` (`{spool_ids: [int, ...]}`) returns `{reset: N}`. The bulk endpoint rejects empty / missing `spool_ids` (HTTP 400) — no wildcard / "reset-all" shortcut, since a typo there would wipe the entire inventory's tracking; the caller must explicitly pass the list (see the sketch after this entry). Both leave `weight_locked` alone: if the user had locked the spool, the lock stays; if it was unlocked, it stays unlocked and the next AMS sync picks up from zero. Frontend adds two affordances: a small eraser icon button on the "Total Consumed" stat card (visible only when there's actually usage to reset AND we're not in Spoolman mode) that opens a confirm modal explaining what the reset clears and that the spools / remaining weights are not changed, and an eraser icon in each table row's action column (visible only on active spools with `weight_used > 0`, hidden in Spoolman mode since Spoolman manages its own usage accounting). Both routes share the same `ConfirmModal` infrastructure as delete/archive — `confirmAction` state now covers `'delete' | 'archive' | 'reset-usage' | 'reset-all-usage'`. i18n: 10 new keys (`resetUsage`, `resetUsageTooltip`, `resetUsageConfirm`, `resetAllUsage`, `resetAllUsageTooltip`, `resetAllUsageConfirm`, `usageReset`, `allUsageReset`, `resetUsageFailed`, plus `resetUsage` reused as the confirm button label) translated across all 8 locales (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW). Parity check holds at 4827 leaves per locale. Tests: 8 new regressions in `test_spool_reset_usage.py` cover: per-spool reset zeroes `weight_used`, per-spool reset does NOT auto-lock, per-spool reset preserves an existing lock, 404 for missing spool, bulk reset zeroes only listed spools (untouched spools keep their usage — the typo-wipe guard), bulk reset rejects an empty list (400), bulk reset rejects a missing `spool_ids` field (400), bulk reset preserves `weight_locked` across mixed locked/unlocked targets. 4897 backend + 1901 frontend tests green.
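A minimal sketch of the bulk-reset contract, assuming a FastAPI-style route; `SPOOLS` is a hypothetical stand-in for the spool table, and the endpoint body is illustrative rather than Bambuddy's actual handler:

```python
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

router = APIRouter()
SPOOLS: dict[int, dict] = {}  # hypothetical stand-in: spool id -> row


class BulkResetRequest(BaseModel):
    spool_ids: list[int] = []


@router.post("/inventory/spools/reset-usage-bulk")
async def reset_usage_bulk(body: BulkResetRequest):
    # Explicit list required: an empty or missing spool_ids is a 400, never a
    # wildcard "reset everything". A typo must not wipe all tracking.
    if not body.spool_ids:
        raise HTTPException(status_code=400, detail="spool_ids must be a non-empty list")
    reset = 0
    for spool_id in body.spool_ids:
        row = SPOOLS.get(spool_id)
        if row is None:
            continue
        row["weight_used"] = 0  # zero the lifetime counter...
        # ...while leaving row["weight_locked"] untouched: locked stays locked,
        # unlocked stays unlocked and the next AMS sync counts up from zero.
        reset += 1
    return {"reset": reset}
```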
### Changed
- Settings → Filament: "Spool Catalog" now shows the same UI in Spoolman mode as in internal-inventory mode — Previously, switching to Spoolman mode hijacked the Spool Catalog card and replaced it with a Spoolman filament list (Vendor — Name / Material / Weight / Spool Weight) with inline edit for name + `spool_weight`. Two separate concepts had been merged into one card: a Bambuddy-local spool tare catalog (the actual purpose of the card — name + weight definitions used to compute spool tare) vs a filament editor for Spoolman's `Filament` entity. The filament-editor view replaced the spool tare table entirely in Spoolman mode, with no way to see or manage the spool catalog. Now the card always renders the local Spool Catalog (Add / Edit / Delete / Export / Import / Reset / bulk-delete) regardless of inventory mode. The Spoolman-filament inline editor is removed — Spoolman users edit filament name / `spool_weight` in Spoolman's own UI. Side effect of the rewrite: the noisy `GET /api/v1/spoolman/inventory/filaments → 400 Bad Request` that fired on the Filament settings page even when Spoolman is disabled is gone, because the component no longer issues the probe at all. Files affected: `frontend/src/components/SpoolCatalogSettings.tsx` (rewrite, ~750 → ~445 lines), `frontend/src/components/SpoolWeightUpdateModal.tsx` (deleted — only used by the removed editor), test file rewritten to match the simplified component. No backend changes — the `PATCH /spoolman/inventory/filaments/{id}` route still exists for API consumers, just no longer wired to a UI.
### Fixed
- Stats page widgets now match Quick Stats — every panel reads per-event data (#1390 follow-up, reported by IndividualGhost1905) — After #1378 moved Quick Stats and the run aggregates to `print_log_entries`, six widgets (Filament Used, Filament Cost, Filament Trends, Printer Stats By Weight / Time, By Material, Color Distribution) plus Failure Analysis still iterated the archive list. Two divergences fell out of that split. Reprints: each reprint of an archive adds a new `print_log_entries` row but the `print_archives` row gets overwritten in place, so event-based widgets counted N reprints while archive-based widgets counted 1. Hard-deleted archives: the foreign key is `ON DELETE SET NULL`, so the event survives as an orphan (`archive_id=NULL`) — Quick Stats kept counting it, archive-iterating widgets couldn't see it. The reporter's test server (14 archives / 52 events / 29 orphans confirmed by the diagnostic query) made the split very visible. Fix swaps the data source in two places: (1) `GET /archives/slim` (the only frontend caller is StatsPage, so every widget that consumes the `archives` query gets the per-event data in one step) now reads from `PrintLogEntry`, LEFT JOINs `PrintArchive` for the sliced `print_time_seconds` estimate (null for orphans, and downstream widgets already fall back to `actual_time_seconds` / `duration_seconds`), uses `PrintLogEntry.duration_seconds` as the authoritative measured-time field when present (the original computed-from-started/completed_at path is kept as the fallback so legacy event rows from pre-#1378 still surface time — sketched below), and returns `quantity=1` per event since per-event semantics make the archive-level quantity multiplier meaningless (verified no StatsPage widget actually reads `quantity` — `grep -n "\.quantity" frontend/src/pages/StatsPage.tsx` returns nothing); (2) `FailureAnalysisService` switched from `PrintArchive` to `PrintLogEntry` for every aggregation (totals, by reason, by filament, by printer, by hour, recent failures, weekly trend) — `project_id` filtering still resolves through the archive table (events don't carry a direct project link) but counts the matching events, not the archives. The conftest `archive_factory` already synthesizes a matching `PrintLogEntry` per archive (added when #1378 landed), so existing tests stay green; one small tweak there now syncs the synthesized event's `created_at` with the archive's so date-range filtered tests don't lose the event to `server_default=func.now()`. Three new regressions in `test_archives_api.py`: `test_slim_counts_reprints_as_separate_rows` (three reprints → three slim rows → 3× filament summed correctly), `test_slim_includes_orphan_events` (archive deleted, event survives, slim still returns it with `print_time_seconds=null`), `test_failure_analysis_counts_reprints_and_orphans` (a reprint of a failed archive + an orphan failed event both contribute to `failed_prints` and `failures_by_reason`). One existing assertion updated — the `test_slim_returns_only_expected_fields` test was asserting `quantity == 2` from an `archive_factory(..., quantity=2)` call, which no longer round-trips through the per-event endpoint; updated to `quantity == 1` with a comment pointing at the semantic shift. 4889 backend tests green, 31 StatsPage frontend tests green, ruff clean.
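A hedged sketch of the measured-time fallback described in (1); `SlimRow` is an illustrative stand-in for the joined row, not the real response model:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class SlimRow:
    duration_seconds: int | None    # measured time on the event row
    started_at: datetime | None
    completed_at: datetime | None
    print_time_seconds: int | None  # sliced estimate via LEFT JOIN (None for orphans)


def resolved_duration(row: SlimRow) -> int | None:
    """duration_seconds wins; legacy pre-#1378 events fall back to timestamps."""
    if row.duration_seconds is not None:
        return row.duration_seconds
    if row.started_at and row.completed_at:
        return int((row.completed_at - row.started_at).total_seconds())
    return None  # downstream widgets fall back to print_time_seconds etc.
```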
- FTP upload no longer silently treats 426 "Failure reading network stream" as success (#1401, second root cause reported by iitazz) — The support bundle from iitazz showed every FTP upload to their P2S (firmware 01.02.00.00) ending the same way: the data-channel `sendall` completes in ~200 ms at an impossibly high "speed" (7+ MB/s for files the printer can only actually receive at ~1–2 MB/s), then `voidresp` returns `426 Failure reading network stream. (error_temp)` from the printer, and Bambuddy proceeds — `WARNING FTP STOR confirmation not received for X (proceeding): 426 ...` followed immediately by `INFO FTP upload complete`. The print command then gets dispatched, the printer tries to parse what's actually a partial 3MF (the reporter's downloaded-from-printer 3MF was 458752 bytes — exactly 7 × 65536, our FTP chunk size — for a 668025-byte source), and surfaces the "unable to parse 3mf file" error the reporter sees. Two stacked failures: a P2S firmware / TLS-data-channel quirk that severs the FTP data stream mid-transfer (separate investigation; #1401 doesn't fix that), AND the `voidresp` handler in `backend/app/services/bambu_ftp.py` swallowing the resulting 426 because the original comment assumed "the data was fully sent so the file is likely on the SD card" — true for socket-level timeouts where we just didn't HEAR the 226 in time (H2D needs 30+ s tolerance and we want to keep that), false for `426` where the printer is explicitly telling us the data stream itself was cut. Fix splits the broad `except Exception` into two branches (sketched below): `except ftplib.Error` (covers `error_reply`, `error_temp`, `error_perm`, `error_proto` — the server responded with a failure on the control channel) logs at ERROR and re-raises, so the outer `except (OSError, ftplib.Error)` returns False and the dispatcher sees a real upload failure instead of green-lighting a print of a truncated file; `except Exception` keeps the existing proceed-with-warning behaviour for socket timeouts so the H2D 30-second `voidresp` tolerance survives. Same split applied to `upload_bytes()` since it had the same `except Exception: pass` shape. The reporter will still hit the underlying 426 (we haven't fixed the P2S transport problem yet — that's separate), but they'll now see an upload failure surfaced honestly rather than a confusing parse error 30 seconds into the print attempt. Tests: two new regressions in `TestUpload` patch `_ftp.voidresp` to raise `ftplib.error_temp("426 ...")` and assert both `upload_file()` and `upload_bytes()` return False. 18 upload-related tests green. The earlier-this-section validation fix is unrelated and stays — it still catches genuinely raw `.gcode` files at the upload step.
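A hedged sketch of the exception split around the `voidresp` confirmation; the helper shape and logger are illustrative, the branch semantics follow the entry:

```python
import ftplib
import logging

log = logging.getLogger(__name__)


def _confirm_stor(ftp: ftplib.FTP, filename: str) -> None:
    try:
        ftp.voidresp()  # wait for the 226 transfer-complete confirmation
    except ftplib.Error:
        # The printer answered with an explicit failure on the control channel
        # (e.g. 426 "Failure reading network stream"): the transfer really
        # broke. Re-raise so the outer except (OSError, ftplib.Error) returns
        # False and the dispatcher never prints a truncated file.
        log.error("FTP STOR failed for %s", filename)
        raise
    except Exception as exc:
        # Socket-level timeout: the data may well be on the SD card and we
        # simply never heard the 226 (H2D needs 30+ s tolerance), so proceed.
        log.warning("FTP STOR confirmation not received for %s (proceeding): %s", filename, exc)
```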
- Upload validation rejects unprintable 3MF / raw-gcode files at the upload step instead of letting them fail at the printer (#1401, reported by iitazz) — Reporter sliced in OrcaSlicer, uploaded the result to Bambuddy, clicked Print, and the printer rejected with "Printing stopped because the printer was unable to parse the 3mf file" — every time, for multiple files, on both library uploads and SD-card-browsed files. Tracing through the support bundle showed: (a) the stored library file ended in `.gcode` (not `.gcode.3mf`), and (b) `background_dispatch.py` constructs the FTP destination filename by appending `.3mf` when the source doesn't already end in `.gcode.3mf` / `.3mf` — so raw gcode gets shipped to the printer named `whatever.gcode.3mf` and the firmware's 3MF parser chokes on the missing zip header. The same shape also manifests as `Failed to parse plates from archive ... File is not a zip file` warnings on Bambuddy's side. Whether the user manually re-extensioned a file or their slicer saved as `.gcode` instead of `.gcode.3mf`, the right place to catch this is the upload, not the printer 30 seconds later. New `validate_print_file_upload()` helper in `backend/app/api/routes/library.py` runs two checks (sketched below): (1) reject any filename ending in `.gcode` (but not `.gcode.3mf`) with a clear message — "Raw .gcode files can't be printed on Bambu printers in network mode — they need a .gcode.3mf zip container (gcode plus metadata). Re-export from your slicer and make sure the file ends in '.gcode.3mf', not just '.gcode'. If your OS hides extensions, double-check the file with the extension visible." (2) For any filename ending in `.3mf` (incl. the compound `.gcode.3mf`), verify the file body starts with `PK\x03\x04` (ZIP magic bytes); reject otherwise with a message pointing at the slicer's "Export Plate Sliced File" action. Suffix-based check rather than `os.path.splitext` because compound extensions like `.gcode.3mf` show up as just `.3mf` after splitext — both must trigger the same validation. Applied to every relevant upload route: `POST /library/files` (covers File Manager upload AND the printer-card drag-drop, which routes through the same endpoint), `POST /archives/upload` (single archive), `POST /archives/upload-bulk` (rejects bad files per-row instead of aborting the batch — one bad file in a 10-file drag-drop doesn't lose the other nine), `POST /archives/{archive_id}/source` (per-archive source 3MF), `POST /archives/upload-source` (slicer-post-processing match-by-name). Validation runs AFTER `_resolve_upload_destination` so folder-permission rejections (403 readonly, 400 missing-path, 409 collision) still take precedence — preserves existing error ordering. STL / image / other non-print uploads bypass the validator entirely; Bambuddy is also a library, not just a print dispatcher. Frontend visibility fix in `FileUploadModal.tsx` (same component used by File Manager + Printers page + Archives): the modal auto-closed after `setIsUploading(false)` regardless of per-file results, so a 400 rejection from the new validator was technically captured but never shown — the modal vanished too quickly. Now (a) errors render inline as red text under the file row instead of as a hover-only `title` tooltip, and (b) the modal stays open if any file ended with `status='error'`, so the user can read the backend's actual remediation message before clicking Close. The bulk-archive `UploadModal.tsx` was already showing inline errors and not auto-closing — that one didn't need the fix. Tests: 7 new integration tests in `TestPrintFileUploadValidation` cover: raw `.gcode` rejection at the library route (asserts the error message names the remedy), non-zip `.3mf` rejection, non-zip `.gcode.3mf` rejection (compound-extension code path), happy-path valid `.gcode.3mf` accepted, STL / non-print extensions still bypass, `POST /archives/upload` non-zip rejection, `POST /archives/upload-bulk` per-file error collection with mixed good/bad files in one request. Plus one fixture update in `test_external_folders_api.py` — `test_upload_persists_correct_db_shape` was uploading `model.3mf` with placeholder bytes `b"x"` to exercise the DB-shape path; updated to use a minimal real zip so the new validator doesn't block the unrelated test. 4968 backend tests green, 41 FileUploadModal frontend tests green, ruff + frontend build clean.
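A hedged sketch of the two checks; `validate_print_file_upload` and the ZIP-magic test come from the entry, while the exact signature and the abbreviated error strings here are illustrative:

```python
from fastapi import HTTPException

ZIP_MAGIC = b"PK\x03\x04"  # a valid 3MF is a zip container


def validate_print_file_upload(filename: str, head: bytes) -> None:
    """head: the first bytes of the uploaded file body."""
    name = filename.lower()
    # (1) Raw gcode: .gcode but NOT the .gcode.3mf container.
    if name.endswith(".gcode") and not name.endswith(".gcode.3mf"):
        raise HTTPException(
            status_code=400,
            detail="Raw .gcode files can't be printed ... re-export as .gcode.3mf",
        )
    # (2) Anything .3mf-suffixed (including compound .gcode.3mf) must be a real
    # zip. Suffix check, not os.path.splitext: splitext("a.gcode.3mf") yields
    # only ".3mf", and both shapes must hit the same validation.
    if name.endswith(".3mf") and not head.startswith(ZIP_MAGIC):
        raise HTTPException(
            status_code=400,
            detail="Not a valid 3MF (zip) — use your slicer's 'Export Plate Sliced File'.",
        )
    # STL, images, and other non-print files fall through untouched.
```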
### Added
- Inventory: Storage Location filter chip (#1400, reported by pgladel) — Reporter manages a lot of physical filament storage locations and wanted a quick way to narrow the inventory list to "what's in shelf A" / "what's in drawer 1" without typing a search query each time. The Inventory page grows a new filter chip alongside the existing Material / Brand / Category / Spool Name dropdowns. Distinct storage-location values are pulled from the spool list and rendered as options; selecting one filters the table to spools assigned to that location. An additional "No location set" entry appears when at least one spool has an empty `storage_location`, so users can find unfiled spools the same way `categoryNone` works for unfiled categories. The chip self-hides when no spool has a storage location set (avoids noise on fresh installs). Pattern is identical to the existing Category chip from #729 — clear-all-filters and `hasActiveFilters` both include the new state. Whitespace normalisation: distinct-value extraction and filter comparison both `.trim()` the field so a spool whose location was saved as `"Shelf A "` doesn't render as a separate dropdown option from `"Shelf A"`. i18n: reuses the existing `inventory.storageLocation` label (already shipped for the spool-edit field — no duplication); adds a new `inventory.storageLocationNone` key, translated to all 8 locales (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW). The "Extended Solution" from the issue (dashboard widget showing locations) is not in this change — open to revisiting if there's appetite. Parity check holds at 4818 leaves per locale. 24 InventoryPage tests in the existing suite still pass.
- Smart plugs: auto-off after AMS drying completes (#1349, reported by Kyobinoyo) — Reporter asked for the equivalent of the existing print-finish auto-off, but triggered when an AMS drying cycle ends — so the smart plug that powers the printer + AMS combo cuts power once humidity has been driven out, without the user babysitting it. Shipped as a simple per-plug pair of fields that mirrors the existing print-finish auto-off shape. Per-AMS plug routing (separate plug for the AMS only, per-AMS targeting on dual-AMS printers) was scoped out for now — Bambuddy's plug model is plug→printer, not plug→AMS, so the trigger fires whenever any AMS attached to the linked printer finishes a dry cycle. Two new `SmartPlug` columns with a same-migration block in `database.py` (SQLite uses `BOOLEAN DEFAULT 0` / `INTEGER DEFAULT 10`; Postgres branches to `DEFAULT false` / `IF NOT EXISTS`): `auto_off_after_drying BOOLEAN` (defaults False so nobody opts in by accident); `off_delay_after_drying_minutes INTEGER` (defaults 10 — separate from the print-finish delay because the AMS chamber is hot post-cycle and users often want a longer cooldown than the print-finish default of 5). The trigger is observed at the MQTT layer, not the scheduler — `BambuMQTTClient` now keeps a per-AMS `_previous_dry_times: dict[int, int]` and, every time `_handle_ams_data` finalises the merged AMS list, walks each unit looking for the `dry_time > 0 → 0` falling edge (sketched below). When it fires, the new `on_drying_complete(ams_id)` callback runs, plumbed through `PrinterManager.set_drying_complete_callback` exactly the way `on_print_start` / `on_print_complete` already are. The seed-from-zero false positive (first MQTT push reports `dry_time=0` and the previous would otherwise read as 0→0) is guarded by the explicit `previous > 0` check, and the per-AMS state means dual-AMS printers can finish drying on AMS 0 and AMS 1 independently without the second one missing the edge. Observing the falling edge at the MQTT layer (rather than in `print_scheduler._sync_drying_state`) is deliberate: the scheduler's `_drying_in_progress` dict only tracks auto-drying initiated by the scheduler itself, so manually-triggered drying from the printer card would not fire there. The new path catches queue-triggered, ambient, AND manual drying identically because it observes firmware-reported state, not our own intent. The manager hook `SmartPlugManager.on_drying_complete(printer_id, db)` mirrors `on_print_complete` but reads the drying-specific toggle and calls `_schedule_delayed_off` with `off_delay_after_drying_minutes` (always time-based — temperature-cooldown is meaningful for the printer hotend, not the AMS chamber, and Bambuddy doesn't track AMS chamber temperature). The HA-script guard from the print-finish path is preserved (scripts can be triggered but not turned off, so they're skipped). Frontend adds a single toggle + delay input on the Smart Plug card next to the existing "Auto Off" section: "Auto Off After Drying" and "Drying delay (minutes)". No changes to the Add Smart Plug modal beyond what the new fields require. Backend tests in `test_smart_plug_manager.py` cover the new shape: drying auto-off schedules with the correct per-plug delay; the toggle being off is a no-op even when `auto_off` (print-finish) is on; the master `enabled` flag still gates; HA script entities are skipped; a printer with no linked plugs is a silent no-op. `test_bambu_mqtt.py` gets a new `TestDryingCompleteCallback` class covering the falling-edge firing once, the seed-from-zero non-fire guard, repeated zero-pushes after the edge not refiring, per-AMS independent tracking on dual-AMS units, and the "new cycle after completion refires" case (covers the user starting a second dry from the printer card). 4961 backend tests green; SQLite + Postgres 16 migration verified idempotent. i18n: 3 new keys (`autoOffAfterDrying`, `autoOffAfterDryingDescription`, `delayAfterDryingMinutes`) translated across all 8 locales (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW). Parity check holds at 4817 leaves per locale.
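A minimal sketch of the per-AMS falling-edge detection; `_previous_dry_times` and the `previous > 0` guard come from the entry, while the small wrapper class is an illustrative stand-in for the relevant slice of `BambuMQTTClient`:

```python
from typing import Callable


class DryingEdgeDetector:
    def __init__(self, on_drying_complete: Callable[[int], None]) -> None:
        self._previous_dry_times: dict[int, int] = {}  # ams_id -> last dry_time
        self._on_drying_complete = on_drying_complete

    def observe(self, ams_units: list[dict]) -> None:
        """Called each time the merged AMS list is finalised."""
        for unit in ams_units:
            ams_id = int(unit["id"])
            dry_time = int(unit.get("dry_time", 0))
            previous = self._previous_dry_times.get(ams_id, 0)
            # Fire only on a true falling edge. `previous > 0` kills the
            # seed-from-zero false positive (first push reports dry_time=0),
            # and per-AMS state lets dual-AMS units finish independently.
            # A later dry_time > 0 re-arms the edge for the next cycle.
            if previous > 0 and dry_time == 0:
                self._on_drying_complete(ams_id)
            self._previous_dry_times[ams_id] = dry_time
```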
### Changed
- Bulk and scheduled archive purge now honour the soft / hard delete choice that single-archive delete already exposes (#1390 follow-up) — Reporter IndividualGhost1905 followed up after the #1378 / #1343 backfill fix landed and pointed out the next inconsistency: the per-archive delete dialog has had an "Also remove from Quick Stats" checkbox since #1343, but the bulk "Purge Old" button and the scheduled daily auto-purge sweeper both ignored that choice and hard-deleted unconditionally. The "Purge Old" path called `archive_purge_service.purge_older_than`, which routed through `ArchiveService.delete_archive` directly — dropped the archive row, the linked `PrintLogEntry` rows got `ON DELETE SET NULL` so they survived with `archive_id=NULL`, Quick Stats kept the filament / cost / energy contribution from the orphaned log rows, but the archive-list-iterating widgets (Filament Trends / Printer Stats / By Material / Color Distribution) lost the contribution and Time Accuracy lost the join target. Visibly inconsistent, and "automatically deleted from statistics without any warning" was a fair characterisation of the half that did drop. The fix threads the same `purge_stats` parameter through every surface, defaulting to soft-delete (matches the single-archive default — files off disk, archive row hidden via `deleted_at`, Quick Stats fully preserved, all archive-list widgets keep showing the row); a sketch follows this entry. Three surfaces affected: (1) `POST /archives/purge` accepts `purge_stats` in the body, defaulting False (soft); the response now echoes which mode ran. (2) `GET /archives/purge/preview` accepts the same flag as a query param so the count matches what a real purge would touch — soft mode excludes already-soft-deleted rows, hard mode counts them as eligible-for-promotion. (3) The auto-purge `archive_auto_purge_stats` setting (default False) controls whether the daily sweeper runs in soft or hard mode; the existing `_maybe_run_auto_purge` reads it on every tick. `ArchivePurgeRequest` / `ArchivePurgeSettings` schemas extended, `archive_purge_service.purge_older_than` and `preview_purge` take a `purge_stats=False` kwarg, and the existing single-row delete tests pass unchanged. Frontend: the "Purge old archives" modal grew a checkbox below the preview ("Also remove from statistics", with a hint explaining the difference), and the Settings → Archives auto-purge card grew the matching toggle (disabled when auto-purge itself is off). Copy in the modal rewritten across all 8 locales to reflect that the default no longer "permanently removes from the database" but instead hides + removes files while keeping Quick Stats intact. Behaviour change for existing auto-purge users: the sweeper used to hard-delete by default and now soft-deletes by default. After the upgrade, existing auto-purge users will start preserving more data in Quick Stats rather than losing it — the safer direction of the two, but worth calling out. Users who want the old hard-delete behaviour can tick the new toggle once. 4 new integration tests in `test_archive_purge_api.py` pin the new contract: manual purge soft-deletes by default, manual purge hard-deletes when the `purge_stats=true` body flag is set, auto-purge soft-deletes by default, auto-purge hard-deletes with the settings opt-in. Existing throttle/disabled tests still pass. 11 tests total in the file, all green; 4951 in the full backend suite. i18n parity check clean across all 8 locales.
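A hedged sketch of the soft / hard switch; `purge_stats` and the soft-delete semantics (`deleted_at` hides the row, stats survive) follow the entry, while `find_archives_older_than`, `remove_files`, and `hard_delete` are hypothetical stand-ins for the real service internals:

```python
from datetime import datetime, timezone


async def purge_older_than(cutoff: datetime, purge_stats: bool = False) -> dict:
    purged = 0
    for archive in await find_archives_older_than(cutoff):  # hypothetical query
        if purge_stats:
            # Hard delete (opt-in): row gone; linked PrintLogEntry rows
            # survive as orphans via ON DELETE SET NULL.
            await hard_delete(archive)          # hypothetical helper
        else:
            # Soft delete (the new default): files off disk, row hidden via
            # deleted_at, Quick Stats fully preserved.
            await remove_files(archive)         # hypothetical helper
            archive.deleted_at = datetime.now(timezone.utc)
        purged += 1
    return {"purged": purged, "mode": "hard" if purge_stats else "soft"}
```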
- Cloud login: corrected the access-token hint to reflect that Bambu Lab no longer surfaces the token in any UI, and called out the China-region constraint explicitly (#1396) — Reporter wintsa123 filed that China-region users can't log into Bambuddy. The code path itself is fine: PR #1013 (April) already added the China-region selector to the login form and routes token validation to `api.bambulab.cn`. The actual gap was documentation. The old in-app `accessTokenHint` said "Paste your Bambu Lab access token (from Bambu Studio)" — but Bambu Studio never exposed the token in any UI, and the profile page on `bambulab.com` that used to show it is gone. For China-region accounts the email/password flow is fundamentally unusable because those accounts are bound to phone numbers, not email — token login is the only path, and the hint didn't say so. Updated `accessTokenHint` in all 8 locales (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW) to state that China accounts must use this path and point at the wiki for the MakerWorld-cookie retrieval procedure. Wiki page `features/cloud-profiles.md` also rewritten under "Access Token Login": adds a "Region: China must use token login" note, replaces the dead "from Bambu Studio" guidance with the working MakerWorld-cookie method (browser DevTools → Application → Cookies → `token`), keeps the Python-script alternative for global-region accounts, and flags that the cookie value is sensitive. No backend changes — the token-validation endpoint accepts both `global` and `china` regions and routes to the right API host already.
### Fixed
- Virtual Printer (queue / immediate / review modes): AMS data flickered or disappeared in BambuStudio between pushalls on P1S/A1 targets (#1387) — Reporter vmhomelab ran a Print Queue VP against a P1S, opened BambuStudio, and saw the External Spool only — no AMS. Toggling Auto-Dispatch (which triggers a VP restart) made AMS briefly appear, then it reverted to defaults. Proxy Mode worked fine. The earlier #1371 sticky-keys fix only handled one of two Bambu firmware incremental-push shapes: it preserved cached AMS when the incoming push omitted the `ams` key entirely (H2D's common incremental shape). The reporter's P1S firmware (01.09.01.00) instead sends incrementals with the `ams` key present but the inner `ams.ams` array stripped — `{ams_status: 1, humidity: 2}` instead of `{ams: [...], ams_status: 1}`. To the previous sticky-keys check that read as "key present, leave new state alone," so the bridge cache got overwritten with the stripped blob; the slicer's next 1 Hz read saw `ams` with no unit list and fell back to the "no AMS" default render. Toggling Auto-Dispatch restarted the VP and got a fresh pushall in; the next P1S incremental stripped it again. (H2D rarely hits this — its incrementals typically don't carry `ams` at all, so #1371 alone was enough there. The reporter's same-VP-architecture setup pinging both an H2D and a P1S would observe that the H2D works while the P1S doesn't, which is exactly the split that surfaced this.) Fix is a deep-merge applied to the `ams` key inside the bridge cache, mirroring the structure Bambuddy itself already uses in `bambu_mqtt.py::_handle_ams_data` (which is why Bambuddy's own AMS display stays coherent on the same firmware): scalar fields like `ams_status` and `humidity` take the new value, but the `ams.ams` array is merged unit-by-unit on `id`, each unit's `tray` array is merged tray-by-tray on `id`, and units / trays the incremental doesn't mention survive intact from the cached full state (see the sketch below). A tray-targeted incremental during a print like `{ams: [{id: 0, tray: [{id: 0, state: 11}]}]}` now updates that one tray's state without nuking the other three trays' `tray_type` / `tray_color`. Helper added as `_merge_ams_dict` in `backend/app/services/virtual_printer/mqtt_bridge.py` next to `_ip_to_uint32_le`, called from the existing sticky-keys block. Three new regression tests under `TestPushStatusCache` in `backend/tests/unit/test_vp_mqtt_bridge.py` cover the status-only partial (the reporter's exact reproduction), the multi-AMS unit-level merge, and the multi-tray merge. The existing `test_incoming_ams_update_replaces_cached_ams` still passes — fresh full updates still take effect, the merge only protects the cache from stripped incrementals. 32 tests total in that file, all green. Verified the cross-subnet topology from the report (printer / Bambuddy / slicer each on a different /24) is incidental: the symptom is the same regardless of subnet once the partial shape arrives; the latency just makes the "empty cache when slicer first connects" race more visible. Proxy Mode is unaffected because Proxy is raw byte-forwarding rather than a cached-as-base mirror — it never had this class of bug.
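A hedged sketch of the deep-merge rules (scalars replaced, `ams` / `tray` lists merged by `id`, unmentioned entries preserved); the exact `_merge_ams_dict` body in `mqtt_bridge.py` may differ:

```python
def _merge_by_id(cached: list[dict], incoming: list[dict], list_key: str | None = None) -> list[dict]:
    """Merge two lists of dicts keyed on 'id'; unmentioned entries survive."""
    merged = {str(item.get("id")): dict(item) for item in cached}
    for item in incoming:
        key = str(item.get("id"))
        base = merged.get(key, {})
        for field, value in item.items():
            if field == list_key and isinstance(value, list):
                # Recurse one level: a unit's tray list merges tray-by-tray.
                base[field] = _merge_by_id(base.get(field, []), value)
            else:
                base[field] = value  # scalar: new value wins
        merged[key] = base
    return list(merged.values())


def _merge_ams_dict(cached: dict, incoming: dict) -> dict:
    """Merge an incremental 'ams' blob over the cached full state."""
    merged = dict(cached)
    for field, value in incoming.items():
        if field == "ams" and isinstance(value, list):
            # Unit-by-unit on id; each unit's tray list tray-by-tray on id.
            merged["ams"] = _merge_by_id(cached.get("ams", []), value, list_key="tray")
        else:
            merged[field] = value  # ams_status, humidity, ... take the new value
    return merged
```

With this shape, a stripped P1S incremental like `{"ams_status": 1, "humidity": 2}` updates the scalars and leaves the cached unit list intact, while a tray-targeted partial touches only the one tray it names.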
- Quick Stats showed Filament Cost = 0 and empty Time Accuracy on pre-upgrade data after the 0.2.4.1 stats rewrite (#1390) — Reporter IndividualGhost1905 upgraded to 0.2.4.1 (which shipped the per-event aggregation rewrite from #1378) and saw the Stats page split between consistent values (Total Prints / Print Time / Filament Used / Energy / Success Rate matched the archive list) and zero-or-empty ones (Filament Cost, Time Accuracy). The inconsistency was a migration gap: #1378 added six columns to `print_log_entries` — `archive_id`, `cost`, `energy_kwh`, `energy_cost`, `failure_reason`, `created_by_id` — but didn't backfill any of them. So every pre-upgrade log entry kept NULL on all six. The new Quick Stats query sums `PrintLogEntry.cost` (gets 0 for legacy data); the time-accuracy query joins `PrintArchive ON archive_id` (drops every legacy run from the average). Counts and per-row fields that already existed pre-#1378 (`status`, `duration_seconds`, `filament_used_grams`) kept working — which is why some panels looked right and others didn't. Fix is a two-step backfill in `run_migrations` next to the existing column-add block (DML, so it runs inside `begin_nested()`, not `_safe_execute`, which is documented "DDL only"): step 1 links each orphan log entry to its archive via `print_name + printer_id` (highest archive `id` wins on tiebreak — newest matches the overwrite-then-stop shape that pre-#1378 reprints left behind); step 2 copies `archive.cost / energy_kwh / energy_cost` onto the latest matching log entry per archive, but only for archives where no log entry yet carries a cost. That second clause is the idempotency anchor and also the double-count guard for users running this migration after #1378 has already written cost-bearing rows for new runs — those archives are left untouched. Earlier reprints stay NULL, matching the "first/latest writes, rest stay NULL" convention #1378 introduced. Summing across the legacy reprint chain reproduces sum-of-archive-cost exactly, so the Quick Stats Filament Cost column matches the pre-upgrade total instead of dropping to zero. The SQL is plain ANSI — a correlated UPDATE with `LIMIT 1` in the SET subquery, `WHERE id IN (SELECT MAX(id) ... GROUP BY archive_id HAVING SUM(CASE WHEN cost IS NOT NULL THEN 1 ELSE 0 END) = 0)` — verified end-to-end on both SQLite (4 unit tests in `test_print_log_backfill_migration.py`) and `postgres:16-alpine` + asyncpg (live container reproduction). For the other widgets the reporter listed (Printer Stats, Filament Trends, By Material, Success by Material, Color Distribution) — those still iterate the archives list on the frontend rather than calling /stats, so they read consistent pre-upgrade data and aren't part of this fix; the inconsistency the reporter saw between Quick Stats and those widgets resolves itself once the backfill brings Quick Stats in line.
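A hedged reconstruction of the two backfill statements. Table and column names follow the entry; everything else (clause ordering, exact predicates) is a guess at the shape the entry describes, not the migration's literal SQL:

```python
# Step 1: link each orphan log entry to its archive via print_name + printer_id;
# the correlated subquery's ORDER BY ... LIMIT 1 makes the highest archive id
# win on tiebreak. Runs inside begin_nested(), per the entry.
STEP_1_LINK_ORPHANS = """
UPDATE print_log_entries SET archive_id = (
    SELECT a.id FROM print_archives a
    WHERE a.print_name = print_log_entries.print_name
      AND a.printer_id = print_log_entries.printer_id
    ORDER BY a.id DESC LIMIT 1
)
WHERE archive_id IS NULL
"""

# Step 2: copy archive cost/energy onto the latest matching entry per archive,
# but only for archives where no entry carries a cost yet (the idempotency
# anchor and the double-count guard for post-#1378 rows).
STEP_2_COPY_COSTS = """
UPDATE print_log_entries SET
    cost        = (SELECT a.cost        FROM print_archives a WHERE a.id = print_log_entries.archive_id),
    energy_kwh  = (SELECT a.energy_kwh  FROM print_archives a WHERE a.id = print_log_entries.archive_id),
    energy_cost = (SELECT a.energy_cost FROM print_archives a WHERE a.id = print_log_entries.archive_id)
WHERE id IN (
    SELECT MAX(id) FROM print_log_entries
    WHERE archive_id IS NOT NULL
    GROUP BY archive_id
    HAVING SUM(CASE WHEN cost IS NOT NULL THEN 1 ELSE 0 END) = 0
)
"""
```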
- Spoolman: spool "Color Name" edits silently never saved — Bambuddy was writing to a field Spoolman doesn't have (#1357) — Reporter pgladel edited a spool's Color Name in Spoolman mode, hit Save, and saw the value snap back to the subtype on the next read. Martin shipped #1319 in May to handle "form round-trips the synth value back as if it were user input" — that fix's read/form-prefill half was correct (the `color_name_is_synthesized` flag, the blank-on-synth form init), but the write half assumed Spoolman has a `color_name` field on Filament. It doesn't. Verified against the live `FilamentUpdateParameters` schema on Spoolman 0.23.1: `name`, `vendor_id`, `material`, `price`, `density`, `diameter`, `weight`, `spool_weight`, `article_number`, `comment`, `settings_extruder_temp`, `settings_bed_temp`, `color_hex`, `multi_color_hexes`, `multi_color_direction`, `external_id`, `extra` — that's the lot. No `color_name`. Spoolman's PATCH happily returns 200 for `{"color_name": "Red"}` and just silently discards the unknown key. So `find_or_create_filament` was either patching a void or creating filament after filament with the same field-that-doesn't-stick (which is what produced the reporter's "BB also created a bunch of new filaments" trail of duplicates on each save attempt). The fix takes the same route as the existing BambuStudio slicer-preset storage: persist color_name on `spool.extra.bambu_color_name` as a JSON-encoded string, register the extra field via `ensure_extra_field` before write (Spoolman 400s on unknown extra keys), and read it back in `_map_spoolman_spool` with priority `spool.extra.bambu_color_name` → `filament.color_name` (forward-compat for any future Spoolman release that adds it) → subtype synth (sketched below). Also dropped the now-dead `color_name` passing through `find_or_create_filament` and `create_filament` — Spoolman would discard it anyway, and keeping the dead pipe risked the same confusion the next time someone reads this code. The previous "match by name then patch color_name" loop is gone; what survives is the name-match resilience added earlier in this change, so an AMS-sync-created filament named `"Glow"` still matches the user-driven edit's composed `"PLA Glow"`, which prevents the duplicate-filament trail. The frontend form's `color_name_is_synthesized` handling is unchanged — that part already worked. Tests rewritten across the three affected suites (`test_spoolman_inventory_methods.py`, `test_spoolman_inventory_helpers.py`, `test_spoolman_inventory_api.py`) to pin the new contract: the filament patch never carries `color_name`, the route writes to the `bambu_color_name` extra, and the read prefers extra over filament-field over synth. Verified end-to-end against the live Spoolman instance at the reporter's setup (PATCH /filament with color_name → field absent from response; PATCH /spool with extra.bambu_color_name → field present in response).
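A hedged sketch of the three-level read priority in `_map_spoolman_spool`; the fallback order comes from the entry, the function wrapper and JSON handling are illustrative:

```python
import json


def resolve_color_name(spool: dict, filament: dict, subtype_synth: str) -> str:
    # 1) Bambuddy's own extra field (Spoolman stores extras JSON-encoded).
    raw = (spool.get("extra") or {}).get("bambu_color_name")
    if raw:
        return json.loads(raw)  # e.g. '"Matte Red"' -> 'Matte Red'
    # 2) Forward-compat: a future Spoolman that grows filament.color_name.
    if filament.get("color_name"):
        return filament["color_name"]
    # 3) Fall back to the name synthesized from the subtype.
    return subtype_synth
```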
- Add Smart Plug (HA mode) — search dropdown let users pick entities the schema would reject, surfacing as a cryptic regex error on Save (#1388) — Reporter MartinNYHC opened the Add Smart Plug dialog, typed a search prefix matching a multi-entity HA device (a Shelly-style outlet exposing one `switch.*` and several `sensor.*` / `binary_sensor.*` siblings under the same friendly-name prefix), clicked one of the entities, filled in the optional power/energy sensors, and clicked Save. The backend returned 422 with the raw Pydantic message `String should match pattern '^(switch|light|input_boolean|script)\.[a-z0-9_]+$'`. After the dropdown closed and the search cleared, the entity-list refetch (with no search param) returned the default-domain-filtered list — which didn't include the user's pick — so `selectedEntity = haEntities.find(...)` was undefined, the field rendered as visually empty (placeholder shown), but `haEntityId` still held the bad value the user had selected. Root cause was at `backend/app/services/homeassistant.py::list_entities`: when a search query was present, the function bypassed the domain filter entirely and returned matches across every HA domain — including ones the `SmartPlugBase.ha_entity_id` regex at `backend/app/schemas/smart_plug.py:17` could never accept. Offering a clickable choice the user can't save is broken UX; the fact that the error message then said `switch|light|input_boolean|script` made it look like a schema problem rather than a search-permissiveness problem. Fix: the allowed-domains filter (`{"switch", "light", "input_boolean", "script"}`, kept in sync with the schema regex) now always runs, and search composes on top of it as an additional substring match against `entity_id` or `friendly_name` (sketched below). Whitespace-only search strings are treated as no search. Verified the smart-plug code path is unchanged between 0.2.4 and 0.2.4.1 — this bug was latent since the script-domain commit in February 2026 and was only noticed now because the reporter hadn't reopened the modal in months. 5 new regression tests in `backend/tests/unit/services/test_homeassistant_list_entities.py` cover the no-search baseline, the search-still-domain-filters case (the actual #1388 reproduction), the entity_id-or-friendly_name substring match, case-insensitivity, and the whitespace-only edge case.
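A minimal sketch of the always-on domain filter with search composed on top; the domain set comes from the entry, the entity shape and function body are illustrative:

```python
# Mirrors the SmartPlugBase.ha_entity_id schema regex: only these domains
# can ever be saved, so only these may be offered in the dropdown.
ALLOWED_DOMAINS = {"switch", "light", "input_boolean", "script"}


def list_entities(entities: list[dict], search: str | None = None) -> list[dict]:
    # Domain filter ALWAYS runs: search must never widen the choices beyond
    # what the schema would accept on save.
    result = [e for e in entities if e["entity_id"].split(".", 1)[0] in ALLOWED_DOMAINS]
    query = (search or "").strip().lower()  # whitespace-only counts as no search
    if query:
        result = [
            e for e in result
            if query in e["entity_id"].lower()
            or query in e.get("friendly_name", "").lower()
        ]
    return result
```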
- H2S with no AMS could not start a print — firmware rejected the dispatch with `07FF_8012` "Failed to get AMS mapping table" (#1386) — Reporter krootstijn (H2S + no AMS) clicked Print and got an immediate firmware error. Two stacked misclassifications had quietly added H2S to the dual-nozzle code paths over time. The first was in `start_print_job` at `backend/app/services/bambu_mqtt.py:3168` — the `is_h2d` flag was set true for `("H2D", "H2D PRO", "H2DPRO", "H2C", "H2S", "X2D")`. That single flag controlled both the firmware bool→int format (legitimately needed for the whole H-family) and the external-spool routing branch (`ext_ams_id = tray_id if is_h2d else 255`), which is only correct for actual dual-nozzle printers. With no AMS, the external-spool sentinel is `254`; the dual-nozzle branch wrote `ams_id=254` into `ams_mapping2` instead of the canonical `255`. The exact failure shape (`07FF_8012`) is even called out in the comment six lines above the bad line — H2S was getting routed straight into the path the comment warned against. The second misclassification was the `use_ams=False` fallback at `bambu_mqtt.py:3213` (`if ams_mapping and use_ams and not is_h2d`) — meant to skip the safety drop on dual-nozzle printers where `use_ams` controls nozzle routing — which also skipped H2S, so the firmware never got a chance to fall back to external-spool mode. A third site at `bambu_mqtt.py:3987` (and its sibling at `backend/app/api/routes/kprofiles.py:119`) classified dual-nozzle by serial prefix `("094", "20P9", "31B8B")`, which is wrong because H2S shares prefix `094` with H2D. Fix splits the conflated flag into two (sketched below): `is_h_family` (firmware-format gate, includes H2S) and `is_dual_nozzle` (routing/use_ams gate, excludes H2S; prefers the runtime `_is_dual_nozzle` flag set from `device.extruder.info` and falls back to model name for the brief window right after connect). The K-profile delete and edit routes now use the same two-source check instead of the serial prefix. Empirically verified across 9+ stored H2S support bundles (`nozzle_count: 1` in every one) and the reporter's bug log (`07FF_8012` immediately after dispatch). Four new regression tests: `test_h2s_single_external_spool_uses_main_id`, `test_h2s_no_ams_forces_use_ams_false`, `test_h2s_keeps_integer_format_for_calibration_fields`, plus a new `test_h2s_uses_single_nozzle_format` in the K-profile suite. The K-profile detection tests were also updated to set both model name and runtime flag rather than relying on serial prefix, since the source of truth has shifted.
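A hedged sketch of the flag split. The model tuple and the runtime-flag preference come from the entry; the helper shapes are illustrative, and the dual-nozzle model list assumes (per the entry) that only H2S drops out of the old `is_h2d` set:

```python
H_FAMILY_MODELS = ("H2D", "H2D PRO", "H2DPRO", "H2C", "H2S", "X2D")
DUAL_NOZZLE_MODELS = ("H2D", "H2D PRO", "H2DPRO", "H2C", "X2D")  # H2S excluded


def is_h_family(model: str) -> bool:
    """Firmware-format gate: the whole H-family needs the bool->int format."""
    return model.upper() in H_FAMILY_MODELS


def is_dual_nozzle(model: str, runtime_flag: bool | None) -> bool:
    """Routing / use_ams gate: H2S is single-nozzle and must stay out.

    Prefer the runtime flag derived from device.extruder.info; fall back to
    the model name only for the brief window right after connect, before the
    first full report arrives. Never classify by serial prefix: H2S shares
    prefix 094 with H2D.
    """
    if runtime_flag is not None:
        return runtime_flag
    return model.upper() in DUAL_NOZZLE_MODELS
```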