Daily Beta Build v0.2.4b3-daily.20260507


Note

This is a daily beta build (2026-05-07). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: Update by pulling the new image:

```
docker pull ghcr.io/maziggy/bambuddy:daily
```

or

```
docker pull maziggy/bambuddy:daily
```


**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.

Added

  • Slicer Bundle (.bbscfg) import — pick presets from a stored bundle instead of resolving cloud/local/standard PresetRefs on every slice. Closes the long tail of preset-resolution corner cases (cloud presets behind login, "from User" sentinel handling, the #-prefix clone trick, dangling inherits on renamed parents, etc.) by letting users upload a BambuStudio "Printer Preset Bundle" (.bbscfg) once per printer and pick from it for every subsequent slice.
    - Service layer (backend/app/services/slicer_api.py): BundleSummary / BundleNotFoundError types; import_bundle / list_bundles / get_bundle / delete_bundle methods; and slice_with_bundle, which posts /slice with a bundle id plus per-category preset names instead of the JSON triplet.
    - Routes (/api/v1/slicer/bundles, all gated on Permission.LIBRARY_UPLOAD): POST / GET / GET :id / DELETE :id. All routes proxy via _resolve_slicer_api_url so they follow the user's preferred_slicer setting (bambu_studio vs orcaslicer). Status-code mapping treats sidecar 4xx as 400, BundleNotFoundError as 404, sidecar unreachable as 503, and sidecar 5xx as 502 (see the sketch after this list).
    - Preview slice (backend/app/services/slice_preview.py::get_preview_filaments): picks up optional bundle_id + printer_name + process_name + filament_names params and routes through slice_with_bundle when set. The cache key gains a bundle-context fingerprint so different bundle picks on the same file occupy distinct entries; gram numbers in the preview now match what the real print will produce instead of being derived from the file's embedded process settings (which can drift from the triplet the actual slice would use). The library.py and archives.py /filament-requirements routes forward the new params.
    - Dispatch (SliceRequest.bundle: SliceBundleSpec): when set, _run_slicer_with_fallback skips resolve_preset_ref and calls slice_with_bundle, and the validator skips the preset-required check so bundle-only requests validate. A 3MF + bundle CLI 5xx still falls back to the embedded-settings slice path (used_embedded_settings=True surfaces in the response), and a sidecar 404 (unknown bundle / preset name) maps to 400.
    - Frontend, SliceModal bundle tier: a new "Slicer bundle" picker at the top of the modal, rendered only when at least one bundle is imported (GET /slicer/bundles non-empty). Selecting a bundle replaces the cloud / local / standard preset dropdowns with bundle-scoped pickers (process + per-slot filament names from the bundle); the printer is implicit, since each .bbscfg has exactly one. "None" leaves the modal on the original preset-triplet path. Submit routes through SliceRequest.bundle, so the backend skips PresetRef resolution and asks the sidecar to materialise the JSON triplet from the stored bundle by name.
    - Frontend types: SliceBundleSpec, plus bundle?: SliceBundleSpec on SliceRequest; getLibraryFileFilamentRequirements / getArchiveFilamentRequirements accept an optional fourth-arg bundle context object.
    - The orca-slicer-api fork's bundle endpoints (shipped on bambuddy/bundle-import) are the server side of this; see the slicer-api sidecar docker-compose for the matching versions.
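A minimal sketch of the status-code mapping described above, assuming an httpx-based proxy to the sidecar. BundleNotFoundError mirrors the changelog; the helper shape and exception surface are illustrative, not the actual Bambuddy code:

```python
import httpx
from fastapi import HTTPException

class BundleNotFoundError(Exception):
    """Raised by the service layer when a bundle id is unknown (per the changelog)."""

def map_sidecar_error(exc: Exception) -> HTTPException:
    """Translate sidecar failures into the route status codes described above."""
    if isinstance(exc, BundleNotFoundError):
        return HTTPException(status_code=404, detail=str(exc))
    if isinstance(exc, httpx.ConnectError):
        return HTTPException(status_code=503, detail="slicer sidecar unreachable")
    if isinstance(exc, httpx.HTTPStatusError):
        if 400 <= exc.response.status_code < 500:
            # Sidecar rejected the request (bad bundle / preset name, etc.)
            return HTTPException(status_code=400, detail=exc.response.text)
        return HTTPException(status_code=502, detail="slicer sidecar error")
    raise exc  # anything else propagates unchanged
```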

Fixed

  • Filament usage double-counted when the AMS auto-falls back to a same-material spool (#957) — When one spool ran out mid-print and the AMS transparently switched to a sibling slot loaded with the same material, the usage tracker credited the originally mapped spool with the full 3MF estimate and added the fallback spool's remain%-delta on top, so a 78 g print could show as 78 g + 60 g = 138 g consumed across the two spools, pushing the empty spool's recorded weight past its label weight (the symptom the original report flagged: a 1209 g spool reading "1188.30 g used" while the new spool only got a 30 g credit).
    - Cause, two interacting bugs: (1) the tray-change recorder in bambu_mqtt.py gated on the literal strings state in ("RUNNING", "PAUSE"), and P2S firmware briefly transitions out of RUNNING during the AMS swap, so the switch was never appended to tray_change_log; (2) the usage-tracker splitting branch in usage_tracker.py was gated on not slot_to_tray, so even when the tray-change log was populated, the splitting code only ran for prints where the slicer's mapping had not been captured — i.e. never on the actual fallback case.
    - Fix: the bambu_mqtt.py gate now keys on the print-lifecycle flags (_was_running and not _completion_triggered), so any tray change between print start and completion is captured regardless of the momentary gcode_state string (see the sketch after this list). The usage_tracker.py gate is split so that tray_change_log evidence with more than one entry always takes over from slot_to_tray, treating the per-segment per-layer gcode usage as the source of truth when the printer actually fed from multiple trays. Path 2 (the AMS remain%-delta fallback) then naturally skips both trays because they are already in handled_trays after splitting, eliminating the double credit.
    - Tests: new test_tray_change_recorded_during_intermediate_state and test_tray_change_not_recorded_after_completion in test_bambu_mqtt.py exercise the new gate; new test_tray_switch_overrides_print_cmd_mapping in test_usage_tracker.py pins that, with ams_mapping=[0] set and tray_change_log=[(0,0),(1,30)], the splitter produces two segments summing to the 3MF estimate (no double count) and adds both (0,0) and (0,1) to handled_trays.
  • 3D Preview returned {"detail":"Not Found"} in Docker installs (#1218) — The embedded GCode viewer's static assets (gcode_viewer/) were not copied into the production Docker image, so clicking "3D Preview" on any archive loaded an iframe at /gcode-viewer/?archive=<id> that returned a bare FastAPI 404. Firefox and Chrome rendered the JSON response inside the iframe area while the outer Bambuddy layout looked normal, masking the failure unless the user actually inspected the iframe.
    - Cause: the Vite production build doesn't stage gcode_viewer/ into static/ either (the dev server serves it via a configureServer middleware that is dev-only), and the only integration test for the route accepted 404 as a valid outcome (assert response.status_code in (200, 404)), so CI never caught the missing files. Affected every Docker build since the embedded viewer landed in 0.2.4b1 (commit 3adce435, 2026-04-22).
    - Fix: the Dockerfile now copies the gcode_viewer/ directory alongside the React build output.
    - Defence in depth: backend/app/main.py logs an ERROR at startup when _gcode_viewer_dir / "index.html" is missing, so future packaging gaps surface in docker logs and the support bundle instead of as silent runtime 404s (see the sketch after this list).
    - Test guard: backend/tests/integration/test_gcode_viewer.py adds test_gcode_viewer_index_served_when_assets_present, which skips when the directory is intentionally absent (unit-test environments) but asserts 200 OK plus a non-empty HTML body when the assets do exist on disk — so a future broken COPY fails CI loudly rather than continuing to ship a broken image.
  • Slice button no longer enabled before the preview slice resolves — Until the preview slice (or the embedded-metadata read for already-sliced 3MFs) returned the per-plate filament list, the SliceModal rendered a synthetic single-slot fallback so the auto-pick had something to bind against. That enabled the Slice button the moment the modal opened, before the slicer had reported which AMS slots the plate actually consumes; clicking would dispatch against opaque defaults, and the resulting print would either pick the wrong filament or fail with a slot-mismatch error after the fact. The fix adds filamentReqsQuery.isSuccess to the isReady chain, so the button stays disabled while the preview slice is in flight (or before the backend's /filament-requirements call settles for sliced files) and flips to enabled the moment the real slot list lands and auto-pick fills it.
  • New AMS RFID rolls auto-named with the wrong colour when the hex is shared across material variants (#1227) — Inserting an Ivory White (PLA Matte) roll always created a spool named "Jade White" because the colour-catalog lookup in create_spool_from_tray filtered by manufacturer + hex only, with no ORDER BY. Three Bambu Lab catalog rows share #FFFFFF — Jade White (PLA Basic), Ivory White (PLA Matte), White (PLA Silk) — and SQLite returned them in rowid order, so the first-inserted entry (Jade White) won every time regardless of the actual material the AMS reported. The same class of bug bites any other shared-hex pair across PLA Basic / Matte / Silk; the whites were just the most visible.
    - Fix: spool_tag_matcher.py::create_spool_from_tray now filters the catalog by tray_sub_brands too — the printer-reported material variant ("PLA Matte" / "PLA Basic" / "PLA Silk") matches the catalog's material column directly. The query also gets an explicit ORDER BY id so the fallback path (when tray_sub_brands is empty: third-party spools / OpenTag tags) is deterministic across SQLite and PostgreSQL instead of DB-implementation-defined (see the sketch after this list). The catalog lookup uses the raw tray_sub_brands value (before the gradient/dual/tri-color subtype upgrade at lines 73-87) because the catalog stores "PLA Basic" for gradient rolls too; the upgraded subtype lives on the spool, not the catalog row.
    - Note for affected users: spools already in the database under the wrong colour name (e.g. four Ivory White rolls labelled "Jade White") don't auto-correct on the next AMS read — the matcher only fires when creating a new spool from RFID. Existing rows need a manual rename in Inventory after upgrading.
    - Tests: four new in test_spool_tag_matcher.py — test_ivory_white_pla_matte_resolves_to_ivory_not_jade (the #1227 regression pin), test_pla_silk_white_resolves_to_white_not_jade (the third collision), test_jade_white_pla_basic_still_resolves_correctly (happy-path guard with all three #FFFFFF entries seeded), and test_unknown_material_falls_back_to_hex_only_lookup (the third-party / empty tray_sub_brands path stays deterministic via ORDER BY).
  • Backups to Gitea / Forgejo failed with "Failed to create tree" on empty repos and "list indices must be integers or slices, not str" on populated repos (#1224, #1225) — Two interacting bugs in the Gitea/Forgejo backend, both inherited from GitHubBackend because PR #1160's class docstring assumed Gitea's Git Data API was fully GitHub-compatible.
    - (1) List-shaped ref response: GET /api/v1/repos/{owner}/{repo}/git/refs/heads/{branch} returns a list of matching refs on Gitea/Forgejo even when only one matches ([{"ref": ..., "object": {"sha": ...}}]), whereas GitHub returns a single object. The inherited push_files and _create_branch_and_push did ref_response.json()["object"]["sha"] and crashed with "list indices must be integers or slices, not str", surfacing as the failure at the top of any push against a populated Gitea repo (#1225's symptom, and #1224's symptom once the user committed any file before the first backup).
    - (2) Empty-repo writes refused: GitHub's Git Data API accepts POST /git/blobs against a brand-new empty repo and creates the initial commit + branch implicitly. Gitea refuses every blob/tree/commit POST with 404 until the underlying git repo has at least one commit, so the inherited _create_initial_commit (which posts blobs → tree → commit → ref in that order) silently failed: every blob POST returned 404, tree_items ended up empty, and the next tree POST also returned 404 ("Failed to create tree", #1224's symptom on a freshly created empty Gitea repo).
    - Fix: GiteaBackend now overrides push_files, _create_branch_and_push, and _create_initial_commit directly instead of inheriting them. The Git Data API path uses a _ref_sha() helper that accepts both list and dict shapes (see the sketch after this list); the empty-repo bootstrap route uses Gitea's Contents API (POST /api/v1/repos/{owner}/{repo}/contents with a files array, branch=<target>, new_branch=<target>), which seeds the initial commit + branch in a single transaction — the Contents API is documented to work on empty repos because it goes through Gitea's higher-level repo-init path. GitHubBackend is untouched: the GitHub backup path is proven working, and the fix is fully isolated to the Gitea side. ForgejoBackend(GiteaBackend) inherits both fixes automatically; tests pin that.
    - Tests: 10 new in test_git_providers.py — TestGiteaBackendListShapeRefResponse (4 tests: _ref_sha accepts list / dict / empty list, plus full push_files happy paths against a list-shaped branch ref and a list-shaped default-branch ref), TestGiteaBackendEmptyRepoInitialCommit (4 tests: an empty repo routes through the Contents API exclusively with no blob/tree/commit/ref Git Data API calls, the payload shape is verified field-by-field against Gitea's documented schema, error truncation works, and an empty file dict returns skipped without firing a useless API call), and TestForgejoInheritsGiteaFixes (2 tests: the list-shape and empty-repo paths both work via inheritance). The existing 6 TestGiteaBackendPushFiles tests still pass since _ref_sha accepts dict-shaped responses too. Total: 78 tests pass across the backup unit + integration suites; ruff clean.
  • Docker data-volume ownership normalised at startup via a gosu entrypoint (#1211) — Two long-standing failure modes have repeatedly bitten Docker users: (1) Docker named volumes are created by the daemon as root:root, and the previous chmod 777 /app/data Dockerfile workaround only covered the named-volume root, so subdirs Bambuddy creates at runtime (virtual_printer/uploads, virtual_printer/certs, etc.) inherited the wrong ownership when the container ran as 1000:1000. (2) The shipped docker-compose.yml ships ./virtual_printer:/app/data/virtual_printer uncommented, and dockerd creates a missing bind-mount source on the host as root before the container starts, leaving the host directory unwritable by uid 1000 inside the container even though the named volume above it had the chmod-777 workaround. The symptom either way: [Errno 13] Permission denied: '/app/data/virtual_printer/uploads', no virtual printer ever starts, and "VP doesn't work" support reports follow.
    - Fix: replaces the chmod-777 hack with a proper entrypoint. deploy/docker-entrypoint.sh runs as root, chowns /app/data and /app/logs (and /app/data/virtual_printer when bind-mounted) to PUID:PGID, then drops to that uid via gosu before exec'ing the app (a Python sketch of the control flow follows this list). The chown is gated behind a top-level ownership check so subsequent restarts skip the recursive traversal — no multi-second startup penalty on multi-GB archive directories. A sentinel .bambuddy file in each data path prevents Docker from re-syncing image directory metadata on every mount (otherwise empty volumes have their ownership reverted from the image on each restart, defeating the idempotency). When the container is started with an explicit user: directive or --user flag, the entrypoint detects it isn't root and falls through to a direct exec, preserving compatibility for users who pin a specific uid.
    - Compose template changes: removes user: "${PUID:-1000}:${PGID:-1000}" (the entrypoint owns the privilege drop now), adds PUID / PGID env vars with the same defaults, and comments out the ./virtual_printer:/app/data/virtual_printer bind mount by default with explicit "only needed if you also run a native install of Bambuddy on the same host and want both to share the VP CA cert" guidance. The entrypoint chowns the host-side dir through the bind mount the first time it sees wrong ownership, so existing uncommented installs continue to work and #1211 specifically gets fixed.
  • Label picker modal clipped the 4th template option and the Cancel button on short viewports (#1230, reported by elit3ge) — Clicking "Print labels" from Inventory opened the picker with only 3 of the 4 templates visible (Avery 5160 was half-cut at the bottom) and no Cancel button reachable, with no way to scroll to them. It surfaced reliably on Windows 11 + Brave at 1080p with browser chrome / DPI scaling shrinking the effective viewport, but the layout bug hits anywhere the modal's max-h-[90vh] lands below ~770 px.
    - Cause: LabelTemplatePickerModal.tsx uses a flex column with overflow-hidden on the outer modal, the spool list as the flex-1 shrinkable child, and the templates section + footer as fixed siblings below it. The spool list had min-h-[160px], which combined with the default min-height: auto for flex items meant the spool list couldn't yield space when the modal was tight — the templates and footer overflowed the modal's bottom edge and got clipped.
    - Fix: min-h-[160px] → min-h-0 on the spool list scroller, which both removes the fixed floor and overrides the implicit min-height: auto so flex shrinking actually works; the spool list now yields height to keep all four templates and the Cancel button visible on constrained viewports. On larger viewports the behaviour is identical (flex-1 still grows to fill). Pre-existing on dev since 0.2.4b2 (commit 864e5c99, the original PR #809 that introduced the modal); not a regression from the spoolman-inventory rebase.
    - Test: a new regression test in LabelTemplatePickerModal.test.tsx asserts all four template names + the Cancel button render in the DOM, and pins the structural fix by checking the spool list scroller has min-h-0 and no min-h-[…] literal — so a future refactor that re-introduces a fixed floor on that element fails CI.
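For the AMS fallback fix (#957), a minimal sketch of the new tray-change gate. Only _was_running, _completion_triggered, and tray_change_log are named in the changelog; the class and method shape are illustrative, assuming the recorder lives on the MQTT client:

```python
class TrayChangeRecorder:
    """Hypothetical recorder shape, for illustration only."""

    def __init__(self) -> None:
        self._was_running = False
        self._completion_triggered = False
        self.tray_change_log: list[tuple[int, int]] = []  # (tray index, layer)

    def on_tray_change(self, tray: int, layer: int) -> None:
        # Old gate: `if gcode_state in ("RUNNING", "PAUSE")` -- P2S firmware
        # briefly leaves RUNNING during an AMS swap, so the change was dropped.
        # New gate: key on the print lifecycle, not the momentary state string.
        if self._was_running and not self._completion_triggered:
            self.tray_change_log.append((tray, layer))
```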
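For the 3D Preview fix (#1218), a sketch of the startup guard described for backend/app/main.py. The directory resolution, logger name, and function name are assumptions; the changelog only specifies the _gcode_viewer_dir / "index.html" check and the ERROR log:

```python
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

# Assumed location of the viewer assets inside the image (illustrative).
_gcode_viewer_dir = Path(__file__).resolve().parent / "static" / "gcode_viewer"

def warn_if_gcode_viewer_missing() -> None:
    """Log loudly at startup when the viewer assets were not packaged."""
    if not (_gcode_viewer_dir / "index.html").is_file():
        logger.error(
            "gcode_viewer assets missing at %s; /gcode-viewer/ will 404. "
            "Check the Dockerfile COPY step.",
            _gcode_viewer_dir,
        )
```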
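For the RFID colour fix (#1227), a sketch of the narrowed catalog lookup. The ColorCatalog model shape is an assumption; the changelog only specifies the filter on the material column, the explicit ORDER BY id, and the use of the raw tray_sub_brands value:

```python
from sqlalchemy import String, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class ColorCatalog(Base):  # assumed model shape, for illustration only
    __tablename__ = "color_catalog"
    id: Mapped[int] = mapped_column(primary_key=True)
    manufacturer: Mapped[str] = mapped_column(String)
    hex: Mapped[str] = mapped_column(String)
    material: Mapped[str] = mapped_column(String)  # e.g. "PLA Matte"
    name: Mapped[str] = mapped_column(String)      # e.g. "Ivory White"

def lookup_catalog_colour(
    session: Session, manufacturer: str, hex_code: str, tray_sub_brands: str | None
) -> ColorCatalog | None:
    stmt = select(ColorCatalog).where(
        ColorCatalog.manufacturer == manufacturer,
        ColorCatalog.hex == hex_code,
    )
    if tray_sub_brands:
        # Raw printer-reported variant, before any gradient/dual/tri-color
        # subtype upgrade: the catalog stores "PLA Basic" for gradients too.
        stmt = stmt.where(ColorCatalog.material == tray_sub_brands)
    # Explicit ordering keeps the empty-tray_sub_brands fallback deterministic
    # across SQLite and PostgreSQL.
    return session.execute(stmt.order_by(ColorCatalog.id)).scalars().first()
```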
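For the Gitea/Forgejo fix (#1224, #1225), a sketch of the _ref_sha() helper that tolerates both ref-response shapes; the error type raised on an empty list is illustrative:

```python
from typing import Any

def _ref_sha(ref_json: Any) -> str:
    """Extract the commit SHA from a GitHub-shaped (dict) or
    Gitea/Forgejo-shaped (list of matching refs) ref response."""
    if isinstance(ref_json, list):
        if not ref_json:
            raise LookupError("ref not found: empty ref list")  # illustrative error type
        ref_json = ref_json[0]  # e.g. [{"ref": ..., "object": {"sha": ...}}]
    return ref_json["object"]["sha"]
```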
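For the ownership fix (#1211), the real entrypoint is deploy/docker-entrypoint.sh using gosu; the Python sketch below re-expresses the described control flow (root check, gated recursive chown, privilege drop, exec) purely for illustration:

```python
import os
import sys

PUID = int(os.environ.get("PUID", "1000"))
PGID = int(os.environ.get("PGID", "1000"))
DATA_PATHS = ("/app/data", "/app/logs")  # plus the VP bind mount when present

def main(argv: list[str]) -> None:
    if os.geteuid() != 0:
        # Container started with user:/--user: no privileges to drop,
        # fall through to a direct exec of the app.
        os.execvp(argv[0], argv)
    for path in DATA_PATHS:
        st = os.stat(path)
        # Gate the recursive chown on a top-level ownership check so restarts
        # skip the traversal on multi-GB archive directories.
        if st.st_uid != PUID or st.st_gid != PGID:
            for root, dirs, files in os.walk(path):
                for name in dirs + files:
                    os.chown(os.path.join(root, name), PUID, PGID)
            os.chown(path, PUID, PGID)
    os.setgid(PGID)  # drop group first, then user (what gosu does)
    os.setuid(PUID)
    os.execvp(argv[0], argv)

if __name__ == "__main__":
    main(sys.argv[1:])
```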
