Daily Beta Build v0.2.5b1-daily.20260517

Pre-release

Note

This is a daily beta build (2026-05-17). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: Update by pulling the new image:

docker pull ghcr.io/maziggy/bambuddy:daily

or

docker pull maziggy/bambuddy:daily


Tip: Use Watchtower (https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.

Fixed

  • Adding a printer with a wrong access code (or unreachable IP) no longer creates an empty card — Several support reports traced back to a single root cause: the user mistyped their access code in the Add Printer dialog, POST /printers/ happily persisted the row, the subsequent printer_manager.connect_printer() call was fire-and-forget so the failure was invisible, and the dashboard ended up showing a printer card that could never display state. The create route now runs printer_manager.test_connection() (the same MQTT probe the standalone Test Connection button has always used) BEFORE inserting the row, and refuses with HTTP 400 if the probe fails. The Printer row is never written on failure. Structured error response: backend returns {detail: {code: "printer_connection_failed", message: "..."}} rather than a plain English string — the new ApiError.code field on the frontend lets the toast layer pick a localized printers.toast.connectionFailedNotAdded key instead of surfacing the English fallback. Existing tests kept green via an autouse _mock_printer_test_connection fixture in test_printers_api.py that defaults the probe to success; a new test_create_printer_rejects_when_mqtt_probe_fails asserts the failure path returns 400, surfaces the stable code, AND verifies the row was not persisted (the critical part — earlier versions of the regression would have passed even if we'd left the row behind). 8 new i18n translations for printers.toast.connectionFailedNotAdded across all 8 locales; parity holds at 4831 leaves. 28 printer-route tests green.
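
A minimal sketch of that ordering, assuming simplified names (probe_printer and persist_printer stand in for printer_manager.test_connection and the real DB layer; the actual route and schemas in bambuddy differ):

    from fastapi import APIRouter, HTTPException
    from pydantic import BaseModel

    router = APIRouter()

    class PrinterCreate(BaseModel):
        name: str
        ip: str
        access_code: str

    async def probe_printer(ip: str, access_code: str) -> bool:
        ...  # stand-in for printer_manager.test_connection (the MQTT probe)

    async def persist_printer(payload: PrinterCreate) -> dict:
        ...  # stand-in for the DB insert

    @router.post("/printers/")
    async def create_printer(payload: PrinterCreate):
        # Probe first; the row is only written once the printer has answered the MQTT probe.
        if not await probe_printer(payload.ip, payload.access_code):
            # Structured detail: the frontend keys a localized toast off detail["code"].
            raise HTTPException(
                status_code=400,
                detail={
                    "code": "printer_connection_failed",
                    "message": "Could not connect to the printer with the given access code",
                },
            )
        return await persist_printer(payload)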

Changed

  • GitHub backup: save-failure messages render inline on the card instead of as a toast — The new "repository is not private" rejection message is ~250 chars listing every credential the backup carries, which clips badly in a toast. Both the initial-setup save and the debounced autosave now stash the backend's error message into a new saveError state and render it as a red inline banner above the test-result block, with whitespace-pre-wrap so the full message stays readable. The banner clears on a successful save, on the next save attempt, and as soon as the user starts editing the URL / token / provider (the three fields whose changes invalidate the privacy check) — so it doesn't linger after the user has already addressed the cause. Short success toasts ("Settings saved", "Token updated", "Backup enabled") are unchanged. Manual dismiss button included for users who want to clear it without retrying.

Security

  • GitHub backup refuses to save against a non-private repository — While auditing real-world Bambuddy backup repos on GitHub I found several that were left public by their owners. That's a serious data leak: the settings backup only filtered bambu_cloud_token and auth_secret_key, so mqtt_username, mqtt_password, ha_token, prometheus_token, bambu_cloud_email, external_url, and the printer access codes (via K-profiles, which carry the serial number) were going to whatever visibility the user picked when they created the repo. Fix is a hard guard at every save, re-checked on every push: POST /github-backup/config and PATCH /github-backup/config (when the URL, token, or provider changes) run a connection test internally and return HTTP 400 unless is_private comes back True. Same check fires inside run_backup() before every scheduled or manual push, so a repository that was private at config time but later flipped to public in the provider's UI gets a clear "Backup aborted: the target repository is no longer private" failure entry instead of leaking the next backup. Implementation: each provider's test_connection (GitHubBackend, ForgejoBackend override, GitLabBackend override; GiteaBackend inherits unchanged) now returns is_private: bool | None (True for confirmed private, False for public or GitLab's internal visibility, None for "couldn't determine": older self-hosted APIs, non-2xx responses). The route helper _enforce_private_repo rejects anything that isn't True, with separate error messages for the public case ("Make the repository private...") vs the unknown-visibility case ("...could not confirm..."). Frontend test-connection UI now renders the visibility result inline — green check + "Repository is private — safe to back up to" when confirmed, red banner with the full list of credentials at risk + "Saving is blocked until..." when public, yellow banner + "could not determine" when null. Three new i18n keys (repoIsPrivate, repoIsPublicWarning, repoVisibilityUnknown) translated across all 8 locales; parity holds at 4830 leaves. Wiki docs/features/backup.md gains a top-level !!! danger "Private repositories only" block listing what's at stake and what to do if the user already has a public backup repo, plus every per-provider setup step is updated from "(can be private)" to "(must be private)". Tests: 5 new in test_github_backup_api.py::TestGitHubBackupPrivateRepoGuard — create rejects public (400 + "not private" in detail), create rejects unknown visibility (400 + "could not confirm"), create rejects failed test_connection (400 + propagates the underlying message), PATCH that changes the URL re-runs the check and rejects on public, PATCH that touches an unrelated field (e.g. schedule_enabled) does NOT call test_connection (proven via a mock that raises if called — without the field-change gate, every benign toggle would trigger a live API call). The existing 15 tests now use an autouse fixture that mocks test_connection to return private-success so they don't try to reach github.com. 4905 backend tests green.
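
A sketch of the visibility gate, assuming test_connection hands back a plain dict; the real _enforce_private_repo helper and its exact messages live in the github-backup routes:

    from fastapi import HTTPException

    def enforce_private_repo(test_result: dict) -> None:
        # test_result["is_private"] is True, False, or None as described above.
        is_private = test_result.get("is_private")
        if is_private is True:
            return  # confirmed private: safe to save / push
        if is_private is False:
            raise HTTPException(
                status_code=400,
                detail="The target repository is not private. Make the repository private before saving.",
            )
        # None: older self-hosted APIs or a non-2xx response, visibility unknown.
        raise HTTPException(
            status_code=400,
            detail="Could not confirm that the repository is private; saving is blocked.",
        )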

Fixed

  • Inventory: "Print labels…" now works in Spoolman mode — Both endpoints already exist (POST /inventory/labels for the built-in table, POST /spoolman/labels for Spoolman), and the LabelTemplatePickerModal correctly branches on a spoolmanMode prop. But the modal was instantiated in InventoryPage.tsx with spoolmanMode={false} hard-coded, with a stale comment from the original PR claiming "Spoolman path hands users an iframe straight to Spoolman so the per-spool button never shows in that context". That assumption stopped being true when the unified inventory UI shipped — the per-spool button DOES show in Spoolman mode now, but every click resolved to /inventory/labels with Spoolman spool IDs and returned 404 Spool(s) not found. Fix passes the actual spoolmanMode value through to the modal (one-line change, plus removing the stale comment block). The existing LabelTemplatePickerModal.test.tsx already covers both branches at the component level — the gap was that no test exercised the InventoryPage wiring. This is another instance of the parity rule from [#1390 follow-up]: inventory features must ship the same UX in both modes; per the new feedback memory, any future inventory change gets a mental checklist of both routes + both client methods + both UI gates before being considered shipped.

Added

  • Inventory: "Reset usage to 0" also works in Spoolman mode (#1390 follow-up) — The first cut of this action only wired the built-in inventory path, so Spoolman users saw the eraser icon disappear when they switched modes. Now the same two endpoints exist on the Spoolman inventory router: POST /spoolman/inventory/spools/{spool_id}/reset-usage PATCHes Spoolman's /spool/{id} with used_weight: 0 for a single spool, POST /spoolman/inventory/spools/reset-usage-bulk does the same per ID across an explicit list and returns {reset: N} (individual Spoolman failures are logged and counted out, the batch keeps going). A reset_spool_usage(spool_id) helper on SpoolmanClient is the actual HTTP call. The mutations in InventoryPage.tsx already had the right shape — they now switch on spoolmanMode to pick api.resetSpoolmanInventorySpoolUsage / api.bulkResetSpoolmanInventorySpoolUsage vs the internal-inventory client methods, and the three spoolmanMode ? undefined : ... gates that hid the eraser buttons in Spoolman mode are gone. Three new tests in test_spoolman_inventory_api.py lock the Spoolman path (per-spool, bulk, and the typo-wipe guard on empty list). The wiki page now says "Spoolman users get the same actions" instead of the original "Spoolman-mode users don't see either button" note. 4900 backend tests green.
  • Inventory: "Reset usage to 0" per spool and across all active spools (#1390 follow-up, requested by IndividualGhost1905) — Each spool's weight_used counter accumulates over its lifetime and feeds the "Total Consumed (Since tracking started)" stat on the Inventory page. There was no way to clear it without nuking the spool or manually editing the field — and manually setting weight_used=0 via PATCH /spools/{id} auto-locks the spool (weight_locked=true is auto-set whenever weight_used is sent explicitly, so AMS auto-sync stops touching the spool), which is the wrong behaviour for "clean-slate my Total Consumed stat so future prints track from zero". Two dedicated endpoints in backend/app/api/routes/inventory.py zero the counter without touching the lock flag: POST /inventory/spools/{spool_id}/reset-usage (single spool) returns the updated SpoolResponse; POST /inventory/spools/reset-usage-bulk ({spool_ids: [int, ...]}) returns {reset: N}. The bulk endpoint rejects empty / missing spool_ids (HTTP 400) — no wildcard / "reset-all" shortcut, since a typo there would wipe the entire inventory's tracking; the caller must explicitly pass the list. Both leave weight_locked alone: if the user had locked the spool, the lock stays; if it was unlocked, it stays unlocked and the next AMS sync picks up from zero. Frontend adds two affordances: a small eraser icon button on the "Total Consumed" stat card (visible only when there's actually usage to reset AND we're not in Spoolman mode) that opens a confirm modal explaining what the reset clears and that the spools / remaining weights are not changed, and an eraser icon in each table row's action column (visible only on active spools with weight_used > 0, hidden in Spoolman mode since Spoolman manages its own usage accounting). Both routes share the same ConfirmModal infrastructure as delete/archive — confirmAction state now covers 'delete' | 'archive' | 'reset-usage' | 'reset-all-usage'. i18n: 10 new keys (resetUsage, resetUsageTooltip, resetUsageConfirm, resetAllUsage, resetAllUsageTooltip, resetAllUsageConfirm, usageReset, allUsageReset, resetUsageFailed, plus resetUsage reused as confirm button label) translated across all 8 locales (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW). Parity check holds at 4827 leaves per locale. Tests: 8 new regressions in test_spool_reset_usage.py cover per-spool reset zeroes weight_used, per-spool reset does NOT auto-lock, per-spool reset preserves an existing lock, 404 for missing spool, bulk reset zeroes only listed spools (untouched spools keep their usage — the typo-wipe guard), bulk reset rejects empty list (400), bulk reset rejects missing spool_ids field (400), bulk reset preserves weight_locked across mixed locked/unlocked targets. 4897 backend + 1901 frontend tests green.
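
A condensed sketch of the bulk-reset shape shared by both items above, with hypothetical get_spool / save_spool helpers standing in for the real DB layer; the Spoolman variant from the previous item follows the same pattern but PATCHes Spoolman's /spool/{id} with used_weight: 0 instead of touching the local table:

    from fastapi import APIRouter, HTTPException
    from pydantic import BaseModel

    router = APIRouter()

    class BulkResetRequest(BaseModel):
        spool_ids: list[int] = []

    async def get_spool(spool_id: int):
        ...  # hypothetical lookup of the local Spool row

    async def save_spool(spool) -> None:
        ...  # hypothetical persist

    @router.post("/inventory/spools/reset-usage-bulk")
    async def reset_usage_bulk(body: BulkResetRequest):
        # No wildcard "reset everything": an explicit, non-empty list is required so a
        # typo cannot wipe the whole inventory's tracking in one call.
        if not body.spool_ids:
            raise HTTPException(status_code=400, detail="spool_ids must be a non-empty list")
        reset = 0
        for spool_id in body.spool_ids:
            spool = await get_spool(spool_id)
            if spool is None:
                continue                    # unknown ids are skipped, the batch keeps going
            spool.weight_used = 0           # zero the lifetime counter...
            # ...but leave weight_locked exactly as it was, so AMS auto-sync behaviour
            # does not change as a side effect of the reset.
            await save_spool(spool)
            reset += 1
        return {"reset": reset}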

Changed

  • Settings → Filament: "Spool Catalog" now shows the same UI in Spoolman mode as in internal-inventory mode — Previously, switching to Spoolman mode hijacked the Spool Catalog card and replaced it with a Spoolman filament list (Vendor — Name / Material / Weight / Spool Weight) with inline edit for name + spool_weight. Two separate concepts had been merged into one card: a Bambuddy-local spool tare catalog (the actual purpose of the card — name + weight definitions used to compute spool tare) vs a filament editor for Spoolman's Filament entity. The filament-editor view replaced the spool tare table entirely in Spoolman mode, with no way to see or manage the spool catalog. Now the card always renders the local Spool Catalog (Add / Edit / Delete / Export / Import / Reset / bulk-delete) regardless of inventory mode. The Spoolman-filament inline editor is removed — Spoolman users edit filament name / spool_weight in Spoolman's own UI. Side effect of the rewrite: the noisy GET /api/v1/spoolman/inventory/filaments → 400 Bad Request that fired on the Filament settings page even when Spoolman is disabled is gone, because the component no longer issues the probe at all. Files affected: frontend/src/components/SpoolCatalogSettings.tsx (rewrite, ~750 → ~445 lines), frontend/src/components/SpoolWeightUpdateModal.tsx (deleted — only used by the removed editor), test file rewritten to match the simplified component. No backend changes — PATCH /spoolman/inventory/filaments/{id} route still exists for API consumers, just no longer wired to a UI.

Fixed

  • Stats page widgets now match Quick Stats — every panel reads per-event data (#1390 follow-up, reported by IndividualGhost1905) — After #1378 moved Quick Stats and the run aggregates to print_log_entries, six widgets (Filament Used, Filament Cost, Filament Trends, Printer Stats By Weight / Time, By Material, Color Distribution) plus Failure Analysis still iterated the archive list. Two divergences fell out of that split. Reprints: each reprint of an archive adds a new print_log_entries row but the print_archives row gets overwritten in place, so event-based widgets counted N reprints while archive-based widgets counted 1. Hard-deleted archives: the foreign key is ON DELETE SET NULL, so the event survives as an orphan (archive_id=NULL) — Quick Stats kept counting it, archive-iterating widgets couldn't see it. The reporter's test server (14 archives / 52 events / 29 orphans confirmed by the diagnostic query) made the split very visible. Fix swaps the data source in two places: (1) GET /archives/slim (the only frontend caller is StatsPage, so every widget that consumes the archives query gets the per-event data in one step) now reads from PrintLogEntry, LEFT JOINs PrintArchive for the sliced print_time_seconds estimate (null for orphans, and downstream widgets already fall back to actual_time_seconds / duration_seconds), uses PrintLogEntry.duration_seconds as the authoritative measured-time field when present (the original computed-from-started/completed_at path is kept as the fallback so legacy event rows from pre-#1378 still surface time), and returns quantity=1 per event since per-event semantics make the archive-level quantity multiplier meaningless (verified no StatsPage widget actually reads quantity: grep -n "\\.quantity" frontend/src/pages/StatsPage.tsx returns nothing); (2) FailureAnalysisService switched from PrintArchive to PrintLogEntry for every aggregation (totals, by reason, by filament, by printer, by hour, recent failures, weekly trend) — project_id filtering still resolves through the archive table (events don't carry a direct project link) but counts the matching events, not the archives. The conftest archive_factory already synthesizes a matching PrintLogEntry per archive (added when #1378 landed), so existing tests stay green; one small tweak there now syncs the synthesized event's created_at with the archive's so date-range filtered tests don't lose the event to server_default=func.now(). Three new regressions in test_archives_api.py: test_slim_counts_reprints_as_separate_rows (three reprints → three slim rows → 3× filament summed correctly), test_slim_includes_orphan_events (archive deleted, event survives, slim still returns it with print_time_seconds=null), test_failure_analysis_counts_reprints_and_orphans (a reprint of a failed archive + an orphan failed event both contribute to failed_prints and failures_by_reason). One existing assertion updated — the test_slim_returns_only_expected_fields test was asserting quantity == 2 from an archive_factory(..., quantity=2) call, which no longer round-trips through the per-event endpoint; updated to quantity == 1 with a comment pointing at the semantic shift. 4889 backend tests green, 31 StatsPage frontend tests green, ruff clean. A sketch of the per-event slim query follows this list.
  • FTP upload no longer silently treats 426 "Failure reading network stream" as success (#1401, second root cause reported by iitazz) — The support bundle from iitazz showed every FTP upload to their P2S (firmware 01.02.00.00) ending the same way: data channel sendall completes in ~200 ms at an impossibly high "speed" (7+ MB/s for files the printer can only actually receive at ~1–2 MB/s), then voidresp returns 426 Failure reading network stream. (error_temp) from the printer, and Bambuddy proceeds — WARNING FTP STOR confirmation not received for X (proceeding): 426 ... followed immediately by INFO FTP upload complete. The print command then gets dispatched, the printer tries to parse what's actually a partial 3MF (the reporter's downloaded-from-printer 3MF was 458752 bytes — exactly 7 × 65536, our FTP chunk size — for a 668025-byte source), and surfaces the "unable to parse 3mf file" error the reporter sees. Two stacked failures: a P2S firmware / TLS-data-channel quirk that severs the FTP data stream mid-transfer (separate investigation; #1401 doesn't fix that), AND the voidresp handler in backend/app/services/bambu_ftp.py swallowing the resulting 426 because the original comment assumed "the data was fully sent so the file is likely on the SD card" — true for socket-level timeouts where we just didn't HEAR the 226 in time (H2D needs 30+ s tolerance and we want to keep that), false for 426 where the printer is explicitly telling us the data stream itself was cut. Fix splits the broad except Exception into two branches: except ftplib.Error (covers error_reply, error_temp, error_perm, error_proto — the server responded with a failure on the control channel) logs at ERROR and re-raises, so the outer except (OSError, ftplib.Error) returns False and the dispatcher sees a real upload failure instead of green-lighting a print of a truncated file; except Exception keeps the existing proceed-with-warning behaviour for socket timeouts so the H2D 30-second voidresp tolerance survives. Same split applied to upload_bytes() since it had the same except Exception: pass shape. The reporter will still hit the underlying 426 (we haven't fixed the P2S transport problem yet — that's separate), but they'll now see an upload failure surfaced honestly rather than a confusing parse error 30 seconds into the print attempt. Tests: two new regressions in TestUpload patch _ftp.voidresp to raise ftplib.error_temp("426 ...") and assert both upload_file() and upload_bytes() return False. 18 upload-related tests green. The upload-validation fix in the next item is unrelated and stays — it still catches genuinely raw .gcode files at the upload step. A sketch of the exception split follows this list.
  • Upload validation rejects unprintable 3MF / raw-gcode files at the upload step instead of letting them fail at the printer (#1401, reported by iitazz) — Reporter sliced in OrcaSlicer, uploaded the result to Bambuddy, clicked Print, and the printer rejected with "Printing stopped because the printer was unable to parse the 3mf file" — every time, for multiple files, on both library uploads and SD-card-browsed files. Trace through the support bundle showed: (a) the stored library file ended in .gcode (not .gcode.3mf), and (b) background_dispatch.py constructs the FTP destination filename by appending .3mf when the source doesn't already end in .gcode.3mf / .3mf — so raw gcode gets shipped to the printer named whatever.gcode.3mf and the firmware's 3MF parser chokes on the missing zip header. The same shape also manifests as Failed to parse plates from archive ... File is not a zip file warnings on Bambuddy's side. Whether the user manually re-extensioned a file or their slicer saved as .gcode instead of .gcode.3mf, the right place to catch this is the upload, not the printer 30 seconds later. New validate_print_file_upload() helper in backend/app/api/routes/library.py runs two checks: (1) reject any filename ending in .gcode (but not .gcode.3mf) with a clear message — "Raw .gcode files can't be printed on Bambu printers in network mode — they need a .gcode.3mf zip container (gcode plus metadata). Re-export from your slicer and make sure the file ends in '.gcode.3mf', not just '.gcode'. If your OS hides extensions, double-check the file with the extension visible." (2) For any filename ending in .3mf (incl. the compound .gcode.3mf), verify the file body starts with PK\x03\x04 (ZIP magic bytes); reject otherwise with a message pointing at the slicer's "Export Plate Sliced File" action. Suffix-based check rather than os.path.splitext because compound extensions like .gcode.3mf show up as just .3mf after splitext — both must trigger the same validation. Applied to every relevant upload route: POST /library/files (covers File Manager upload AND the printer-card drag-drop, which routes through the same endpoint), POST /archives/upload (single archive), POST /archives/upload-bulk (rejects bad files per-row instead of aborting the batch — one bad file in a 10-file drag-drop doesn't lose the other nine), POST /archives/{archive_id}/source (per-archive source 3MF), POST /archives/upload-source (slicer-post-processing match-by-name). Validation runs AFTER _resolve_upload_destination so folder-permission rejections (403 readonly, 400 missing-path, 409 collision) still take precedence — preserves existing error ordering. STL / image / other non-print uploads bypass the validator entirely; Bambuddy is also a library, not just a print dispatcher. Frontend visibility fix in FileUploadModal.tsx (same component used by File Manager + Printers page + Archives): the modal auto-closed after setIsUploading(false) regardless of per-file results, so a 400 rejection from the new validator was technically captured but never shown — the modal vanished too quickly. Now (a) errors render inline as red text under the file row instead of as a hover-only title tooltip, and (b) the modal stays open if any file ended with status='error', so the user can read the backend's actual remediation message before clicking Close. The bulk archive UploadModal.tsx was already showing inline errors and not auto-closing — that one didn't need the fix. 
Tests: 7 new integration tests in TestPrintFileUploadValidation cover: raw .gcode rejection at the library route (asserts the error message names the remedy), non-zip .3mf rejection, non-zip .gcode.3mf rejection (compound-extension code path), happy-path valid .gcode.3mf accepted, STL / non-print extensions still bypass, POST /archives/upload non-zip rejection, POST /archives/upload-bulk per-file error collection with mixed good/bad files in one request. Plus one fixture update in test_external_folders_api.py: test_upload_persists_correct_db_shape was uploading model.3mf with placeholder bytes b"x" to exercise the DB-shape path; updated to use a minimal real zip so the new validator doesn't block the unrelated test. 4968 backend tests green, 41 FileUploadModal frontend tests green, ruff + frontend build clean.
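
A minimal sketch of the two upload checks described above, assuming the upload body is available as bytes; the real validate_print_file_upload in library.py carries the longer remediation messages quoted in the item:

    from fastapi import HTTPException

    ZIP_MAGIC = b"PK\x03\x04"

    def validate_print_file_upload(filename: str, body: bytes) -> None:
        name = filename.lower()
        # Check 1: raw gcode (but not the compound .gcode.3mf container) can't be
        # printed over the network; a suffix check is used on purpose because
        # os.path.splitext reports ".3mf" for ".gcode.3mf" as well.
        if name.endswith(".gcode") and not name.endswith(".gcode.3mf"):
            raise HTTPException(
                status_code=400,
                detail="Raw .gcode files can't be printed in network mode; re-export as .gcode.3mf.",
            )
        # Check 2: anything claiming to be a 3MF must actually be a zip archive.
        if name.endswith(".3mf") and not body.startswith(ZIP_MAGIC):
            raise HTTPException(
                status_code=400,
                detail="This .3mf file is not a valid zip archive; use the slicer's 'Export Plate Sliced File' action.",
            )
        # STL, images and other non-print uploads fall through untouched.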
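
For the /archives/slim change above, a rough sketch of the per-event query in SQLAlchemy terms; it assumes the PrintLogEntry / PrintArchive models exist as described, and the real route returns more fields:

    from sqlalchemy import select

    async def list_slim_events(db):
        # One row per PrintLogEntry; the archive is LEFT-JOINed only for the sliced
        # time estimate, which stays NULL for orphaned events.
        stmt = select(PrintLogEntry, PrintArchive.print_time_seconds).join(
            PrintArchive, PrintLogEntry.archive_id == PrintArchive.id, isouter=True
        )
        rows = []
        for entry, estimate in (await db.execute(stmt)).all():
            duration = entry.duration_seconds
            if duration is None and entry.started_at and entry.completed_at:
                # Fallback for pre-#1378 rows that never got a measured duration.
                duration = int((entry.completed_at - entry.started_at).total_seconds())
            rows.append({
                "print_time_seconds": estimate,     # None for orphans; widgets fall back
                "actual_time_seconds": duration,
                "filament_used_grams": entry.filament_used_grams,
                "quantity": 1,                      # per-event semantics: always 1
            })
        return rows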
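
For the FTP fix above, the shape of the split exception handling, shown against a bare ftplib connection rather than the real bambu_ftp.py helper (logging omitted):

    import ftplib

    def confirm_stor(ftp: ftplib.FTP) -> bool:
        # Called after the data channel has been closed; waits for the final reply.
        try:
            ftp.voidresp()
        except ftplib.Error:
            # The printer answered with a failure on the control channel (426 and
            # friends): the transfer really is broken, so propagate it and let the
            # caller return False to the dispatcher.
            raise
        except Exception:
            # Socket-level timeout: the data was fully sent and slower printers
            # (H2D) may just answer late, so keep the proceed-with-warning path.
            pass
        return True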

Added

  • Inventory: Storage Location filter chip (#1400, reported by pgladel) — Reporter manages a lot of physical filament storage locations and wanted a quick way to narrow the inventory list to "what's in shelf A" / "what's in drawer 1" without typing a search query each time. Inventory page grows a new filter chip alongside the existing Material / Brand / Category / Spool Name dropdowns. Distinct storage-location values are pulled from the spool list and rendered as options; selecting one filters the table to spools assigned to that location. An additional No location set entry appears when at least one spool has an empty storage_location, so users can find unfiled spools the same way categoryNone works for unfiled categories. The chip self-hides when no spool has a storage location set (avoids noise on fresh installs). Pattern is identical to the existing Category chip from #729 — clear-all-filters and hasActiveFilters both include the new state. Whitespace normalisation: distinct-value extraction and filter comparison both .trim() the field so a spool whose location was saved as "Shelf A " doesn't render as a separate dropdown option from "Shelf A". i18n: reuses the existing inventory.storageLocation label (already shipped for the spool-edit field — no duplication); adds a new inventory.storageLocationNone key, translated to all 8 locales (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW). The "Extended Solution" from the issue (dashboard widget showing locations) is not in this change — open to revisiting if there's appetite. Parity check holds at 4818 leaves per locale. 24 InventoryPage tests in the existing suite still pass.
  • Smart plugs: auto-off after AMS drying completes (#1349, reported by Kyobinoyo) — Reporter asked for the equivalent of the existing print-finish auto-off, but triggered when an AMS drying cycle ends — so the smart plug that powers the printer + AMS combo cuts power once humidity has been driven out, without the user babysitting it. Shipped as a simple per-plug pair of fields that mirrors the existing print-finish auto-off shape. Per-AMS plug routing (separate plug for the AMS only, per-AMS targeting on dual-AMS printers) was scoped out for now — Bambuddy's plug model is plug→printer, not plug→AMS, so the trigger fires whenever any AMS attached to the linked printer finishes a dry cycle. Two new SmartPlug columns with a same-migration block in database.py (SQLite uses BOOLEAN DEFAULT 0 / INTEGER DEFAULT 10; Postgres branches to DEFAULT false / IF NOT EXISTS): auto_off_after_drying BOOLEAN (defaults False so nobody opts in by accident); off_delay_after_drying_minutes INTEGER (defaults 10 — separate from the print-finish delay because the AMS chamber is hot post-cycle and users often want longer cooldown than the print-finish default of 5). Trigger is observed at the MQTT layer, not the scheduler — BambuMQTTClient now keeps a per-AMS _previous_dry_times: dict[int, int] and, every time _handle_ams_data finalises the merged AMS list, walks each unit looking for the dry_time > 0 → 0 falling edge. When it fires, the new on_drying_complete(ams_id) callback runs, plumbed through PrinterManager.set_drying_complete_callback exactly the way on_print_start / on_print_complete already are. The seed-from-zero false positive (first MQTT push reports dry_time=0 and the previous would otherwise read as 0→0) is guarded by the explicit previous > 0 check, and the per-AMS state means dual-AMS printers can finish drying on AMS 0 and AMS 1 independently without the second one missing the edge. Observing the falling edge at the MQTT layer (rather than in print_scheduler._sync_drying_state) is deliberate: the scheduler's _drying_in_progress dict only tracks auto-drying initiated by the scheduler itself, so manually-triggered drying from the printer card would not fire there. The new path catches queue-triggered, ambient, AND manual drying identically because it observes firmware-reported state, not our own intent. Manager hook in SmartPlugManager.on_drying_complete(printer_id, db) mirrors on_print_complete but reads the drying-specific toggle, calls _schedule_delayed_off with off_delay_after_drying_minutes (always time-based — temperature-cooldown is meaningful for the printer hotend, not the AMS chamber, and Bambuddy doesn't track AMS chamber temperature). The HA-script guard from the print-finish path is preserved (scripts can be triggered but not turned off, so they're skipped). Frontend adds a single toggle + delay input on the Smart Plug card next to the existing "Auto Off" section: "Auto Off After Drying" and "Drying delay (minutes)". No changes to the Add Smart Plug modal beyond what the new fields require. Backend tests in test_smart_plug_manager.py cover the new shape: drying auto-off schedules with the correct per-plug delay; the toggle being off is a no-op even when auto_off (print-finish) is on; the master enabled flag still gates; HA script entities are skipped; printer with no linked plugs is a silent no-op. 
test_bambu_mqtt.py gets a new TestDryingCompleteCallback class covering the falling-edge firing once, the seed-from-zero non-fire guard, repeated zero-pushes after the edge not refiring, per-AMS independent tracking on dual-AMS units, and the "new cycle after completion refires" case (covers the user starting a second dry from the printer card). 4961 backend tests green; SQLite + Postgres 16 migration verified idempotent. i18n: 3 new keys (autoOffAfterDrying, autoOffAfterDryingDescription, delayAfterDryingMinutes) translated across all 8 locales (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW). Parity check holds at 4817 leaves per locale.
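
A reduced sketch of the falling-edge detection, written as a standalone helper; in Bambuddy the state lives on BambuMQTTClient and the callback is plumbed through PrinterManager:

    from typing import Callable

    class DryingEdgeDetector:
        def __init__(self, on_drying_complete: Callable[[int], None]):
            self._previous_dry_times: dict[int, int] = {}
            self._on_drying_complete = on_drying_complete

        def observe(self, ams_units: list[dict]) -> None:
            # Walk every AMS unit after the merged AMS list has been finalised.
            for unit in ams_units:
                ams_id = int(unit.get("id", 0))
                dry_time = int(unit.get("dry_time") or 0)
                previous = self._previous_dry_times.get(ams_id, 0)
                # Fire only on a real falling edge: previous > 0 guards the
                # seed-from-zero first push, and per-AMS state lets dual-AMS
                # printers finish their cycles independently.
                if previous > 0 and dry_time == 0:
                    self._on_drying_complete(ams_id)
                self._previous_dry_times[ams_id] = dry_time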

Changed

  • Bulk and scheduled archive purge now honour the soft / hard delete choice that single-archive delete already exposes (#1390 follow-up) — Reporter IndividualGhost1905 followed up after the #1378 / #1343 backfill fix landed and pointed out the next inconsistency: the per-archive delete dialog has had a "Also remove from Quick Stats" checkbox since #1343, but the bulk "Purge Old" button and the scheduled daily auto-purge sweeper both ignored that choice and hard-deleted unconditionally. The "Purge Old" path called archive_purge_service.purge_older_than which routed through ArchiveService.delete_archive directly — dropped the archive row, the linked PrintLogEntry rows got ON DELETE SET NULL so they survived with archive_id=NULL, Quick Stats kept the filament / cost / energy contribution from the orphaned log rows but the archive-list-iterating widgets (Filament Trends / Printer Stats / By Material / Color Distribution) lost the contribution and Time Accuracy lost the join target. Visibly inconsistent, and "automatically deleted from statistics without any warning" was a fair characterisation of the half that did drop. Fix is to thread the same purge_stats parameter through every surface, defaulting to soft-delete (matches the single-archive default — files off disk, archive row hidden via deleted_at, Quick Stats fully preserved, all archive-list widgets keep showing the row). Three surfaces affected: (1) POST /archives/purge accepts purge_stats in the body, defaults False (soft); the response now echoes which mode ran. (2) GET /archives/purge/preview accepts the same flag as a query param so the count matches what a real purge would touch — soft mode excludes already-soft-deleted rows, hard mode counts them as eligible-for-promotion. (3) The auto-purge archive_auto_purge_stats setting (default False) controls whether the daily sweeper runs in soft or hard mode; the existing _maybe_run_auto_purge reads it on every tick. ArchivePurgeRequest / ArchivePurgeSettings schemas extended, archive_purge_service.purge_older_than and preview_purge take purge_stats=False kwarg, the existing single-row delete tests pass unchanged. Frontend: "Purge old archives" modal grew a checkbox below the preview ("Also remove from statistics" with a hint explaining the difference), and the Settings → Archives auto-purge card grew the matching toggle (disabled when auto-purge itself is off). Copy in the modal rewritten across all 8 locales to reflect that the default no longer "permanently removes from the database" but instead hides + removes files while keeping Quick Stats intact. Behaviour change for existing auto-purge users: the sweeper used to hard-delete by default and now soft-deletes by default. After the upgrade, existing auto-purge users will start preserving more data in Quick Stats rather than losing it — the safer direction of the two, but call it out. Users who want the old hard-delete behaviour can tick the new toggle once. 4 new integration tests in test_archive_purge_api.py pin the new contract: manual purge soft-deletes by default, manual purge hard-deletes when purge_stats=true body flag is set, auto-purge soft-deletes by default, auto-purge hard-deletes when the settings opt-in. Existing throttle/disabled tests still pass. 11 tests total in the file, all green; 4951 in the full backend suite. i18n parity check clean across all 8 locales.
  • Cloud login: corrected the access-token hint to reflect that Bambu Lab no longer surfaces the token in any UI, and called out the China-region constraint explicitly (#1396) — Reporter wintsa123 filed that China-region users can't log into Bambuddy. The code path itself is fine: PR #1013 (April) already added the China-region selector to the login form and routes token validation to api.bambulab.cn. The actual gap was documentation. The old in-app accessTokenHint said "Paste your Bambu Lab access token (from Bambu Studio)" — but Bambu Studio never exposed the token in any UI, and the profile page on bambulab.com that used to show it is gone. For China-region accounts the email/password flow is fundamentally unusable because those accounts are bound to phone numbers, not email — token login is the only path, and the hint didn't say so. Updated accessTokenHint in all 8 locales (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW) to state that China accounts must use this path and point at the wiki for the MakerWorld-cookie retrieval procedure. Wiki page features/cloud-profiles.md also rewritten under "Access Token Login": adds a "Region: China must use token login" note, replaces the dead "from Bambu Studio" guidance with the working MakerWorld-cookie method (browser DevTools → Application → Cookies → token), keeps the Python-script alternative for global-region accounts, and flags that the cookie value is sensitive. No backend changes — the token-validation endpoint accepts both global and china regions and routes to the right API host already.

Fixed

  • Virtual Printer (queue / immediate / review modes): AMS data flickered or disappeared in BambuStudio between pushalls on P1S/A1 targets (#1387) — Reporter vmhomelab ran a Print Queue VP against a P1S, opened BambuStudio, and saw the External Spool only — no AMS. Toggling Auto-Dispatch (which triggers a VP restart) made AMS briefly appear, then it reverted to defaults. Proxy Mode worked fine. The earlier #1371 sticky-keys fix only handled one of two Bambu firmware incremental-push shapes: it preserved cached AMS when the incoming push omitted the ams key entirely (H2D's common incremental shape). The reporter's P1S firmware (01.09.01.00) instead sends incrementals with the ams key present but the inner ams.ams array stripped — {ams_status: 1, humidity: 2} instead of {ams: [...], ams_status: 1}. To the previous sticky-keys check, that read as "key present, leave new state alone," so the bridge cache got overwritten with the stripped blob; the slicer's next 1 Hz read saw ams with no unit list and fell back to the "no AMS" default render. Toggling Auto-Dispatch restarted the VP and got a fresh pushall in; the next P1S incremental stripped it again. (H2D rarely hits this — its incrementals typically don't carry ams at all, so #1371 alone was enough there. The reporter's setup, with the same VP architecture pointed at both an H2D and a P1S, would observe the H2D working while the P1S doesn't, which is exactly the split that surfaced this.) Fix is a deep-merge applied to the ams key inside the bridge cache, mirroring the merge Bambuddy itself already does in bambu_mqtt.py::_handle_ams_data (which is why Bambuddy's own AMS display stays coherent on the same firmware): scalar fields like ams_status and humidity take the new value, but the ams.ams array is merged unit-by-unit on id, each unit's tray array is merged tray-by-tray on id, and units / trays the incremental doesn't mention survive intact from the cached full state. A tray-targeted incremental during a print like {ams: [{id: 0, tray: [{id: 0, state: 11}]}]} now updates that one tray's state without nuking the other three trays' tray_type/tray_color. Helper added as _merge_ams_dict in backend/app/services/virtual_printer/mqtt_bridge.py next to _ip_to_uint32_le, called from the existing sticky-keys block. Three new regression tests under TestPushStatusCache in backend/tests/unit/test_vp_mqtt_bridge.py cover the status-only partial (the reporter's exact reproduction), the multi-AMS unit-level merge, and the multi-tray merge. The existing test_incoming_ams_update_replaces_cached_ams still passes — fresh full updates still take effect, the merge only protects the cache from stripped incrementals. 32 tests total in that file, all green. Verified the cross-subnet topology from the report (printer / Bambuddy / slicer each on a different /24) is incidental: the symptom is the same regardless of subnet once the partial-shape arrives; the latency just makes the "empty cache when slicer first connects" race more visible. Proxy Mode is unaffected because Proxy is raw byte-forwarding rather than a cached-as-base mirror — it never had this class of bug. A sketch of the merge follows this list.
  • Quick Stats showed Filament Cost = 0 and empty Time Accuracy on pre-upgrade data after the 0.2.4.1 stats rewrite (#1390) — Reporter IndividualGhost1905 upgraded to 0.2.4.1 (which shipped the per-event aggregation rewrite from #1378) and saw the Stats page split between consistent values (Total Prints / Print Time / Filament Used / Energy / Success Rate matched the archive list) and zero-or-empty ones (Filament Cost, Time Accuracy). The inconsistency was a migration gap: #1378 added six columns to print_log_entries (archive_id, cost, energy_kwh, energy_cost, failure_reason, created_by_id) but didn't backfill any of them. So every pre-upgrade log entry kept NULL on all six. The new Quick Stats query sums PrintLogEntry.cost (gets 0 for legacy data); the time-accuracy query joins PrintArchive ON archive_id (drops every legacy run from the average). Counts and per-row fields that already existed pre-#1378 (status, duration_seconds, filament_used_grams) kept working — which is why some panels looked right and others didn't. Fix is a two-step backfill in run_migrations next to the existing column-add block (DML, runs inside begin_nested() not _safe_execute since the latter is documented "DDL only"): step 1 links each orphan log entry to its archive via print_name + printer_id (highest archive id wins on tiebreak — newest matches the overwrite-then-stop shape that pre-#1378 reprints left behind); step 2 copies archive.cost / energy_kwh / energy_cost onto the latest matching log entry per archive, but only for archives where no log entry yet carries a cost. That second clause is the idempotency anchor and also the double-count guard for users running this migration after #1378 has already written cost-bearing rows for new runs — those archives are left untouched. Earlier reprints stay NULL, matching the "first/latest writes, rest stay NULL" convention #1378 introduced. Sum across the legacy reprint chain reproduces sum-of-archive-cost exactly, so the Quick Stats Filament Cost column matches the pre-upgrade total instead of dropping to zero. SQL is plain ANSI — correlated UPDATE with LIMIT 1 in the SET subquery, WHERE id IN (SELECT MAX(id) ... GROUP BY archive_id HAVING SUM(CASE WHEN cost IS NOT NULL THEN 1 ELSE 0 END) = 0) — verified end-to-end on both SQLite (4 unit tests in test_print_log_backfill_migration.py) and postgres:16-alpine + asyncpg (live container reproduction). For the other widgets the reporter listed (Printer Stats, Filament Trends, By Material, Success by Material, Color Distribution) — those still iterate the archives list on the frontend rather than calling /stats, so they read consistent pre-upgrade data and aren't part of this fix; the inconsistency the reporter saw between Quick Stats and those widgets resolves itself once the backfill brings Quick Stats in line. A simplified sketch of the step-2 statement follows this list.
  • Spoolman: spool "Color Name" edits silently never saved — Bambuddy was writing to a field Spoolman doesn't have (#1357) — Reporter pgladel edited a spool's Color Name in Spoolman mode, hit Save, and saw the value snap back to the subtype on the next read. Martin shipped #1319 in May to handle "form round-trips the synth value back as if it were user input" — that fix's read/form-prefill half was correct (the color_name_is_synthesized flag, the blank-on-synth form init), but the write half assumed Spoolman has a color_name field on Filament. It doesn't. Verified against the live FilamentUpdateParameters schema on Spoolman 0.23.1: name, vendor_id, material, price, density, diameter, weight, spool_weight, article_number, comment, settings_extruder_temp, settings_bed_temp, color_hex, multi_color_hexes, multi_color_direction, external_id, extra — that's the lot. No color_name. Spoolman's PATCH happily returns 200 for {"color_name": "Red"} and just silently discards the unknown key. So find_or_create_filament was either patching a void or creating filament after filament with the same field-that-doesn't-stick (which is what produced the reporter's "BB also created a bunch of new filaments" trail of duplicates on each save attempt). The fix takes the same route as the existing BambuStudio slicer-preset storage: persist color_name on spool.extra.bambu_color_name as a JSON-encoded string, register the extra field via ensure_extra_field before write (Spoolman 400s on unknown extra keys), and read it back in _map_spoolman_spool with priority spool.extra.bambu_color_name → filament.color_name (forward-compat for any future Spoolman release that adds it) → subtype synth. Also dropped the now-dead color_name passing through find_or_create_filament and create_filament — Spoolman would discard it anyway and keeping the dead pipe risked the same confusion the next time someone reads this code. The previous "match by name then patch color_name" loop is gone; what survives is the name-match resilience added as part of this fix, so an AMS-sync-created filament named "Glow" still matches the user-driven edit's composed "PLA Glow", which prevents the duplicate-filament trail. The frontend form's color_name_is_synthesized handling is unchanged — that part already worked. Tests rewritten across the three affected suites (test_spoolman_inventory_methods.py, test_spoolman_inventory_helpers.py, test_spoolman_inventory_api.py) to pin the new contract: filament patch never carries color_name, route writes to bambu_color_name extra, read prefers extra over filament-field over synth. Verified end-to-end against the live Spoolman instance at the reporter's setup (PATCH /filament with color_name → field absent from response; PATCH /spool with extra.bambu_color_name → field present in response). A sketch of the read priority follows this list.
  • Add Smart Plug (HA mode) — search dropdown let users pick entities the schema would reject, surfacing as a cryptic regex error on Save (#1388) — Reporter MartinNYHC opened the Add Smart Plug dialog, typed a search prefix matching a multi-entity HA device (a Shelly-style outlet exposing one switch.* and several sensor.* / binary_sensor.* siblings under the same friendly-name prefix), clicked one of the entities, filled in the optional power/energy sensors, and clicked Save. The backend returned 422 with the raw Pydantic message String should match pattern '^(switch|light|input_boolean|script)\.[a-z0-9_]+$'. After the dropdown closed and the search cleared, the entity-list refetch (with no search param) returned the default-domain-filtered list — which didn't include the user's pick — so selectedEntity = haEntities.find(...) was undefined, the field rendered as visually empty (placeholder shown), but haEntityId still held the bad value the user had selected. Root cause was at backend/app/services/homeassistant.py::list_entities: when a search query was present, the function bypassed the domain filter entirely and returned matches across every HA domain — including ones the SmartPlugBase.ha_entity_id regex at backend/app/schemas/smart_plug.py:17 could never accept. Offering a clickable choice the user can't save is broken UX; the fact that the error message then said switch|light|input_boolean|script made it look like a schema problem rather than a search-permissiveness problem. Fix: the allowed-domains filter ({"switch", "light", "input_boolean", "script"}, kept in sync with the schema regex) now always runs, and search composes on top of it as an additional substring match against entity_id or friendly_name. Whitespace-only search strings are treated as no search. Verified the smart-plug code path is unchanged between 0.2.4 and 0.2.4.1 — this bug was latent since the script-domain commit in February 2026 and was only noticed now because the reporter hadn't reopened the modal in months. 5 new regression tests in backend/tests/unit/services/test_homeassistant_list_entities.py cover the no-search baseline, the search-still-domain-filters case (the actual #1388 reproduction), the entity_id-or-friendly_name substring match, case-insensitivity, and the whitespace-only edge case.
  • H2S with no AMS could not start a print — firmware rejected the dispatch with 07FF_8012 "Failed to get AMS mapping table" (#1386) — Reporter krootstijn (H2S + no AMS) clicked Print and got an immediate firmware error. Two stacked misclassifications had quietly added H2S to the dual-nozzle code paths over time. The first was in start_print_job at backend/app/services/bambu_mqtt.py:3168 — the is_h2d flag was set true for ("H2D", "H2D PRO", "H2DPRO", "H2C", "H2S", "X2D"). That single flag controlled both the firmware bool→int format (legitimately needed for the whole H-family) and the external-spool routing branch (ext_ams_id = tray_id if is_h2d else 255) which is only correct for actual dual-nozzle printers. With no AMS, the external-spool sentinel is 254; the dual-nozzle branch wrote ams_id=254 into ams_mapping2 instead of the canonical 255. The exact failure shape (07FF_8012) is even called out in the comment six lines above the bad line — H2S was getting routed straight into the path the comment warned against. The second misclassification was the use_ams=False fallback at bambu_mqtt.py:3213 (if ams_mapping and use_ams and not is_h2d) — meant to skip the safety drop on dual-nozzle printers where use_ams controls nozzle routing — also skipped H2S, so the firmware never got a chance to fall back to external-spool mode. A third site at bambu_mqtt.py:3987 (and its sibling at backend/app/api/routes/kprofiles.py:119) classified dual-nozzle by serial prefix ("094", "20P9", "31B8B"), which is wrong because H2S shares prefix 094 with H2D. Fix splits the conflated flag into two: is_h_family (firmware-format gate, includes H2S) and is_dual_nozzle (routing/use_ams gate, excludes H2S; prefers the runtime _is_dual_nozzle flag set from device.extruder.info and falls back to model name for the brief window right after connect). The K-profile delete and the edit route now use the same two-source check instead of the serial prefix. Empirically verified across 9+ stored H2S support bundles (nozzle_count: 1 in every one) and the reporter's bug log (07FF_8012 immediately after dispatch). Four new regression tests: test_h2s_single_external_spool_uses_main_id, test_h2s_no_ams_forces_use_ams_false, test_h2s_keeps_integer_format_for_calibration_fields, plus a new test_h2s_uses_single_nozzle_format in the K-profile suite. The K-profile detection tests were also updated to set both model name and runtime flag rather than relying on serial prefix, since the source-of-truth has shifted.
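
For the H2S fix directly above, a sketch of the split classification. The model sets are taken from the description; treating every remaining H-family model as dual-nozzle is an assumption here, and the runtime flag stands in for the value derived from device.extruder.info:

    H_FAMILY_MODELS = {"H2D", "H2D PRO", "H2DPRO", "H2C", "H2S", "X2D"}   # firmware-format gate
    DUAL_NOZZLE_MODELS = H_FAMILY_MODELS - {"H2S"}                        # routing / use_ams gate

    def is_dual_nozzle(model: str, runtime_dual_nozzle: bool | None) -> bool:
        # Prefer the runtime flag derived from device.extruder.info; fall back to the
        # model name for the brief window right after connect.
        if runtime_dual_nozzle is not None:
            return runtime_dual_nozzle
        return model.upper() in DUAL_NOZZLE_MODELS

    def external_spool_ams_id(model: str, tray_id: int, runtime_dual_nozzle: bool | None) -> int:
        # Dual-nozzle printers keep the tray id (254 for the external spool) because it
        # also encodes nozzle routing; single-nozzle printers, H2S included, must send
        # the canonical 255 or the firmware rejects the mapping table (07FF_8012).
        return tray_id if is_dual_nozzle(model, runtime_dual_nozzle) else 255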
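
For the Virtual Printer AMS fix above, a simplified version of the unit-by-unit / tray-by-tray merge (an illustrative stand-in for _merge_ams_dict, not the exact implementation):

    def merge_ams_dict(cached: dict, incoming: dict) -> dict:
        merged = dict(cached)
        for key, value in incoming.items():
            if key != "ams":
                merged[key] = value  # scalars like ams_status / humidity take the new value
                continue
            # Merge the unit list by id so units the incremental omits survive intact.
            units = {u.get("id"): dict(u) for u in cached.get("ams", [])}
            for new_unit in value:
                unit = units.setdefault(new_unit.get("id"), {})
                # Merge trays by id inside each unit; untouched trays keep their
                # tray_type / tray_color from the cached full state.
                trays = {t.get("id"): dict(t) for t in unit.get("tray", [])}
                for new_tray in new_unit.get("tray", []):
                    trays.setdefault(new_tray.get("id"), {}).update(new_tray)
                unit.update({k: v for k, v in new_unit.items() if k != "tray"})
                unit["tray"] = list(trays.values())
            merged["ams"] = list(units.values())
        return merged

With the reporter's stripped incremental ({ams_status: 1, humidity: 2}) the loop never touches the cached unit list, which is the behaviour the new regression tests pin.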
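
For the Quick Stats backfill above, a simplified version of the step-2 statement (cost column only; the real migration also copies energy_kwh / energy_cost, links orphans to archives first, and runs inside begin_nested()):

    from sqlalchemy import text

    BACKFILL_COST_SQL = text("""
        -- Copy the archive's cost onto the newest matching log entry, but only for
        -- archives where no entry carries a cost yet (idempotency + double-count guard).
        UPDATE print_log_entries
        SET cost = (SELECT a.cost
                    FROM print_archives a
                    WHERE a.id = print_log_entries.archive_id)
        WHERE id IN (
            SELECT MAX(id)
            FROM print_log_entries
            WHERE archive_id IS NOT NULL
            GROUP BY archive_id
            HAVING SUM(CASE WHEN cost IS NOT NULL THEN 1 ELSE 0 END) = 0
        )
    """)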
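
For the Color Name fix above, a sketch of the read priority, written against plain dicts rather than the real Spoolman response models:

    import json

    def resolve_color_name(spool: dict, filament: dict, synthesized: str) -> str:
        # Priority: Bambuddy's own extra field, then a native filament.color_name if a
        # future Spoolman release adds one, then the synthesized subtype fallback.
        raw = (spool.get("extra") or {}).get("bambu_color_name")
        if raw:
            return json.loads(raw)  # extra fields are stored JSON-encoded
        if filament.get("color_name"):
            return filament["color_name"]
        return synthesized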
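
For the Home Assistant entity-search fix above, a sketch of the always-on domain filter with search composed on top; the domain set and matching rules follow the description, the function shape is illustrative:

    ALLOWED_DOMAINS = {"switch", "light", "input_boolean", "script"}  # kept in sync with the schema regex

    def filter_entities(entities: list[dict], search: str | None) -> list[dict]:
        # The domain filter always applies; search only narrows further, so the
        # dropdown can never offer an entity the SmartPlugBase schema would reject.
        query = (search or "").strip().lower()  # whitespace-only search counts as no search
        results = []
        for entity in entities:
            entity_id = entity.get("entity_id", "")
            if entity_id.split(".", 1)[0] not in ALLOWED_DOMAINS:
                continue
            if query:
                friendly = (entity.get("attributes", {}).get("friendly_name") or "").lower()
                if query not in entity_id.lower() and query not in friendly:
                    continue
            results.append(entity)
        return results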
