Daily Beta Build v0.2.3b3-daily.20260411

Pre-release

Note

This is a daily beta build (2026-04-11). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: Update by pulling the new image:

docker pull ghcr.io/maziggy/bambuddy:daily

or

docker pull maziggy/bambuddy:daily


**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.

Improved

  • AMS Drying Support for P2S — Remote AMS drying and queue auto-drying now work on P2S printers with firmware 01.02.00.00 or later. Previously P2S was hard-blocked from the drying feature.

New Features

  • SpoolBuddy Device Management Tab — Settings → SpoolBuddy now lists every registered SpoolBuddy device with live connection status, system details (firmware, IP, CPU temperature, memory, disk, OS, daemon and system uptime), hardware health flags (NFC / scale OK), and an Unregister button gated by a confirm modal. Previously, when a daemon crash caused SpoolBuddy to register itself twice, the kiosk UI silently used only the first device and there was no UI path to delete the orphaned duplicate — administrators had to delete the row directly in the database. A new DELETE /spoolbuddy/devices/{device_id} endpoint (gated by inventory:delete) handles the removal and broadcasts a spoolbuddy_unregistered websocket event so other tabs refresh immediately. A yellow warning banner appears when more than one device is registered to flag likely crash-duplicates. If an online device is accidentally unregistered, it will re-register itself on its next heartbeat. The Settings tab header also shows a device-count badge and a green/gray bullet indicating whether at least one registered device is online. Fully localized in English, German, and Japanese.
  • Print Files Directly from Project View (#930) — The project detail page now lists the printable files from every linked library folder inline, with Play (Print Now) and CalendarPlus (Add to Queue) action buttons on each sliced file (.gcode and .gcode.3mf). No more round-tripping through File Manager to reprint project files. Prints triggered from the project view are automatically associated with the originating project, so the resulting archive shows up in that project's history without any manual assignment. Backend adds a project_id query parameter to GET /library/files that returns all files across linked folders in a single query (replacing the prior one-request-per-folder pattern) and validates project_id on both the direct-print and queue paths so a stale ID yields a 404 instead of a FK-constraint 500. Fully localized across all 7 UI languages. Thanks to legend813 for the contribution.
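The single-query listing and the 404-on-stale-ID validation described above can be sketched in Python. The dict-based stores and the name `list_project_files` are illustrative stand-ins, not Bambuddy's actual models or endpoint code:

```python
# Minimal sketch, assuming dict-based stores; all names here are illustrative.
PRINTABLE = (".gcode", ".gcode.3mf")  # sliced-file extensions listed inline

def list_project_files(projects, folders, project_id):
    project = projects.get(project_id)
    if project is None:
        # A stale project_id yields a 404-style error, not a FK-constraint 500.
        raise LookupError(f"project {project_id} not found")
    # One pass over all linked folders replaces the prior
    # one-request-per-folder pattern.
    return [name
            for folder_id in project["linked_folder_ids"]
            for name in folders.get(folder_id, [])
            if name.endswith(PRINTABLE)]
```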
  • Printers Page Search and Filters (#852) — The Printers page now has a live search bar and two filter dropdowns (status and location) to make finding specific printers in large setups easier, especially on mobile where Ctrl+F is impractical. Search matches printer name, model, location, and serial number (case-insensitive, whitespace-trimmed) and has a clear button. The status filter covers All / Printing / Paused / Idle / Finished / Error / Offline and is reactive to WebSocket status updates via a React Query cache subscription — so a print finishing while "Printing" is selected immediately removes the printer from the filtered list. The location filter is only shown when at least one printer has a location configured. All three filters are combinable; the controls are hidden when no printers are configured yet; and an empty-state message appears when no printer matches the current search/filters. Fully localized across all 7 UI languages. Thanks to legend813 for the contribution.
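The matching rules described above (case-insensitive, whitespace-trimmed, across name/model/location/serial, with combinable status and location filters) can be sketched as plain functions; the data shape and function names are illustrative, not the actual React implementation:

```python
def matches(printer, query):
    """Case-insensitive, whitespace-trimmed substring match on
    name, model, location, and serial number."""
    q = query.strip().lower()
    if not q:
        return True
    fields = (printer.get("name"), printer.get("model"),
              printer.get("location"), printer.get("serial"))
    return any(q in (f or "").lower() for f in fields)

def filter_printers(printers, query="", status="All", location="All"):
    """All three filters are combinable; 'All' disables a dropdown filter."""
    return [p for p in printers
            if matches(p, query)
            and (status == "All" or p.get("status") == status)
            and (location == "All" or p.get("location") == location)]
```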
  • LDAP Default Fallback Group — Settings → Authentication → LDAP → Advanced now has a "Default group" selector. When an LDAP user authenticates but is not listed in any mapped LDAP group, they are automatically assigned to this fallback group instead of being left without permissions. Previously such users could log in successfully but landed on empty pages because every permission check failed. Leave the setting empty to preserve the old behavior. A warning is logged each time the fallback is applied so administrators can spot missing group assignments.
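The fallback behavior might look like the following sketch, where `resolve_group` and the argument shapes are hypothetical names for illustration (the real authenticator resolves mappings against LDAP search results):

```python
import logging

logger = logging.getLogger("ldap")

def resolve_group(user_dn, mapped_groups, user_groups, default_group=None):
    """Return the mapped app group for the first matching LDAP group;
    otherwise fall back to default_group (if configured) with a warning."""
    for g in user_groups:
        if g in mapped_groups:
            return mapped_groups[g]
    if default_group:
        # Logged so administrators can spot missing group assignments.
        logger.warning("LDAP user %s matched no mapped group; using fallback %r",
                       user_dn, default_group)
        return default_group
    return None  # old behavior: logged in, but no permissions
```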

Changed

  • SpoolBuddy Kiosk LCD Now Powers Off on Idle (#937) — The SpoolBuddy kiosk's "screen blank timeout" setting previously only painted a black CSS overlay over the browser window; the HDMI panel's backlight stayed on indefinitely, wasting power and letting OLED/LED panels burn in. The blanking path is now moved down to the OS layer: the install script installs swayidle and wlopm, and labwc's autostart launches a new watchdog (spoolbuddy/install/spoolbuddy-idle.sh) that queries the backend once on boot for the device's display_blank_timeout and hands it to swayidle, which powers HDMI off via wlopm --off HDMI-A-1 after the configured idle period and powers it back on via wlopm --on when labwc delivers any input event (touch, keypress). The redundant CSS overlay and its pointer/keyboard listeners have been removed from SpoolBuddyLayout — one source of truth now. Screen blanking is opt-in: display_blank_timeout=0 (the default) skips launching swayidle entirely and the display stays on forever, preserving current behavior for users who didn't pick a timeout. The default for users who newly enable blanking is 300 seconds. Changes made to the timeout in SpoolBuddy Settings → Display take effect on the next kiosk restart — tap Quick Menu → Restart Browser to apply without a full reboot. A new GET /api/v1/spoolbuddy/devices/{device_id}/display endpoint (gated on inventory:update, same as the existing PUT and heartbeat endpoints) is what the kiosk-side watchdog reads, so no new permissions are required on the device's API key. The watchdog also writes a full startup trace (env vars, resolved timeout, the exact swayidle command it execs) to ~/.cache/spoolbuddy-idle.log so any future breakage on a different kiosk setup is trivially diagnosable, and auto-detects WAYLAND_DISPLAY from XDG_RUNTIME_DIR with a short retry loop in case labwc hasn't finished exporting its env by the time autostart runs. Thanks to TravisWilder for reporting.
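The watchdog's core decision (skip entirely when the timeout is 0, otherwise hand the timeout to swayidle with wlopm power commands) can be expressed as a small Python sketch; the real watchdog is a shell script, and `build_idle_command` is a hypothetical name used only for illustration:

```python
def build_idle_command(timeout_seconds, output="HDMI-A-1"):
    """Return the swayidle argv for the configured blank timeout, or None
    when blanking is disabled (timeout 0, the default)."""
    if timeout_seconds <= 0:
        return None  # skip launching swayidle; the display stays on forever
    return ["swayidle", "-w",
            "timeout", str(timeout_seconds), f"wlopm --off {output}",
            "resume", f"wlopm --on {output}"]
```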

Fixed

  • H2C Nozzle Rack Slot Numbering Off When Slot 1's Nozzle Is Mounted (#943) — The H2C nozzle rack card on the Printers page rendered every rack slot shifted by one position whenever the lowest-numbered slot (rack ID 16, displayed as "slot 1") had its nozzle currently picked up into a hotend. In that state the printer firmware omits the mounted slot's ID from device.nozzle.info entirely instead of sending an empty placeholder, so the rack arrived with 5 entries (IDs 17..21) plus the 2 L/R hotends. The frontend was computing its rack base ID via min(present_ids), which then became 17 instead of the fixed 16, and every remaining nozzle was rendered one position to the left — the nozzle physically in slot 2 appeared as "slot 1", slot 3 appeared as "slot 2", and so on, with the single empty placeholder falling off the right end as a phantom "slot 6" that should have been the actual empty "slot 1". The rack base is now hardcoded to 16 to match the fixed H2C rack ID layout (already encoded in the test_h2c_nozzle_rack_populated_with_8_entries backend test), so the empty slot stays anchored to its physical position regardless of which nozzle is currently in use. A frontend regression test exercises exactly this case (ID 16 missing, remaining slots in order) and asserts the rendered slot row reads [—, 0.2, 0.6, 0.8, 1.0, 1.2]. Thanks to netscout2001 for reporting.
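The difference between the buggy `min(present_ids)` base and the fixed base can be shown in a few lines; the dict shape and the `rack_slots` name are illustrative, with the diameters taken from the regression test's expected row:

```python
RACK_BASE = 16  # fixed H2C layout: rack IDs 16..21 map to slots 1..6

def rack_slots(nozzles):
    """Map {rack_id: diameter} to a 6-entry slot row; a missing ID (its
    nozzle is mounted in a hotend) renders as an empty slot.

    The old code used min(nozzles) as the base, so a missing ID 16 shifted
    every remaining nozzle one slot to the left."""
    return [nozzles.get(RACK_BASE + i) for i in range(6)]
```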
  • Energy Snapshot Capture Crashes on PostgreSQL — With an external PostgreSQL database configured, the hourly smart-plug energy snapshot loop (introduced with the #941 fix) logged asyncpg.DataError: invalid input for query argument $2: ... can't subtract offset-naive and offset-aware datetimes every hour and failed to persist any snapshots, so date-filtered energy statistics in total-consumption mode stayed empty on Postgres installs. The engine already had a before_cursor_execute hook that strips tzinfo from bound datetime parameters before they reach asyncpg (the smart_plug_energy_snapshots.recorded_at column is TIMESTAMP WITHOUT TIME ZONE to match the rest of the schema), but the hook only stripped datetimes one level deep — when SQLAlchemy's insertmanyvalues feature batched multiple snapshot rows into a single INSERT ... SELECT FROM (VALUES ...) statement, parameters arrived as nested containers (lists of tuples, or a list inside an outer container) and the inner datetimes slipped through untouched. The hook now recursively walks any nesting of dict/list/tuple and strips tzinfo at any depth, so every parameter shape SQLAlchemy may use is handled. SQLite installs were never affected (SQLite ignores tzinfo entirely).
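The recursive parameter walk can be sketched as follows; `strip_tzinfo` is an illustrative helper name, and in the real code the equivalent logic runs inside the engine's `before_cursor_execute` hook:

```python
from datetime import datetime

def strip_tzinfo(value):
    """Recursively strip tzinfo from datetimes at any depth of
    dict/list/tuple nesting, covering the nested parameter shapes that
    SQLAlchemy's insertmanyvalues batching can produce."""
    if isinstance(value, datetime) and value.tzinfo is not None:
        return value.replace(tzinfo=None)
    if isinstance(value, dict):
        return {k: strip_tzinfo(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        # Preserve the container type (list stays list, tuple stays tuple).
        return type(value)(strip_tzinfo(v) for v in value)
    return value
```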
  • Wrong Filament Color Name Shown on Printer Tab AMS Popup (#857) — PLA Translucent Cherry Pink (and other colors outside a small hand-maintained list) appeared as "Scarlet Red" on the Printer tab AMS slot popup, and was also auto-provisioned into the inventory under the wrong name on the first RFID read. Root cause: both the backend spool auto-provisioner and the frontend AMS popup resolved color names by looking up the Bambu tray_id_name code (e.g. A17-R1) in a hardcoded table, and when the exact code wasn't listed they fell back to a suffix-only lookup (R1 → Scarlet Red). The suffix half of that code is not globally unique across material families — A17-R1 is PLA Translucent Cherry Pink, while A01-R1 is PLA Matte Scarlet Red — so the fallback was structurally guaranteed to produce wrong names for any color the hand-maintained list didn't happen to cover. The resolver has been rewritten to use the existing color_catalog table (seeded from catalog_defaults.py plus the FilamentColors.xyz sync) as the single source of truth. Backend lookup is now by hex color against the catalog; the frontend fetches a compact {hex: name} map once per session via a new GET /api/inventory/colors/map endpoint (available to any authenticated user, not gated on inventory:read), stores it in a ColorCatalogProvider context, and uses it for all getColorName() calls. The hardcoded tables in backend/app/core/bambu_colors.py, frontend/src/utils/colors.ts, and frontend/src/pages/PrintersPage.tsx have been removed entirely. Existing spools that were auto-created with a wrong name before this fix need to be renamed manually — the fix only affects new auto-provisioning and live display. Thanks to lightmaster for reporting.
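The shape of the fix (lookup by hex, no suffix fallback) can be illustrated with a toy catalog; the hex values below are made up, and the real names come from the `color_catalog` table:

```python
# Hypothetical catalog rows; the hex values are invented for illustration.
CATALOG = {"#F9C1BD": "PLA Translucent Cherry Pink",
           "#DE4343": "PLA Matte Scarlet Red"}

def get_color_name(hex_code, catalog):
    """Resolve a tray color by hex against the catalog. Crucially, there is
    no fallback on the tray_id_name suffix: A17-R1 and A01-R1 share the R1
    suffix but name different colors, so suffix lookups are structurally
    guaranteed to go wrong for uncovered codes."""
    return catalog.get(hex_code.upper(), hex_code)  # unknown hex: show as-is
```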
  • LDAP Auto-Provisioning Fails on Upgraded SQLite Installs (#794) — First LDAP login on an upgraded SQLite install hit sqlite3.IntegrityError: NOT NULL constraint failed: users.password_hash and fell through to a 500 response, because the users table on disk had been created before LDAP support landed with password_hash VARCHAR(255) NOT NULL. The model was already nullable=True and the migration to drop the constraint existed, but only ran on PostgreSQL — SQLite was skipped entirely because it has no ALTER COLUMN ... DROP NOT NULL. The migration now patches sqlite_master directly via PRAGMA writable_schema and bumps PRAGMA schema_version so the current connection reloads the table definition without requiring a restart. Fresh installs were never affected (they go through Base.metadata.create_all which uses the current nullable model). Thanks to DylanBrass for reporting.
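The `PRAGMA writable_schema` technique can be demonstrated end-to-end against a throwaway in-memory database with a minimal `users` table (the real migration of course targets Bambuddy's full schema):

```python
import sqlite3

# Recreate the pre-LDAP on-disk schema with the NOT NULL constraint.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY,"
             " password_hash VARCHAR(255) NOT NULL)")

# Patch sqlite_master directly, since SQLite has no ALTER COLUMN ... DROP NOT NULL.
conn.execute("PRAGMA writable_schema=ON")
conn.execute("""UPDATE sqlite_master
                SET sql = replace(sql, 'password_hash VARCHAR(255) NOT NULL',
                                       'password_hash VARCHAR(255)')
                WHERE type = 'table' AND name = 'users'""")
# Bump the schema cookie so this connection reloads the patched definition.
version = conn.execute("PRAGMA schema_version").fetchone()[0]
conn.execute(f"PRAGMA schema_version = {version + 1}")
conn.execute("PRAGMA writable_schema=OFF")

# Would previously raise sqlite3.IntegrityError: NOT NULL constraint failed.
conn.execute("INSERT INTO users (password_hash) VALUES (NULL)")
```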
  • Energy Statistics Empty for Week/Month/Day in Total Consumption Mode (#941) — With "Total consumption" selected as the energy tracking mode, the Statistics page showed the correct kWh total for All Time but zero for every time-filtered range (Today, This Week, This Month, …). The backend fell back to summing per-print archive energy whenever a date filter was active, but in total-consumption mode the per-print column was often empty for two reasons: (1) the starting-kWh value was held in an in-memory dict (_print_energy_start) that was lost on any backend restart mid-print, so prints that spanned a restart never got an energy delta computed; (2) historical prints from before a smart plug was added had no value at all. The fix replaces the in-memory dict with a persisted energy_start_kwh column on the archive row, and adds an hourly snapshot loop (smart_plug_energy_snapshots table) that captures each plug's lifetime counter. The /archives/stats endpoint now computes date-range totals via per-plug (last-in-range − baseline) deltas from those snapshots, clamping counter resets to zero. A warming-up flag is returned (and rendered as a tooltip next to the Energy stats on StatsPage) when the query runs on incomplete snapshot history — e.g. right after upgrade, before the hourly loop has built up a baseline before the selected range — so the "low" values during the first hours after upgrading are explained in-product rather than misread as a bug. Fully localized across all 7 UI languages. Per-print energy tracking is now restart-resilient in all modes as a side-effect. Thanks to Mike (TheMadMike23) for reporting.
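The per-plug delta computation described above (last-in-range minus pre-range baseline, with counter resets clamped to zero) can be sketched like this; the snapshot shape and `range_total` name are illustrative, not the actual `/archives/stats` code:

```python
def range_total(snapshots, start, end):
    """Date-range kWh from hourly lifetime-counter snapshots.

    snapshots: list of {"plug", "t", "kwh"} dicts (illustrative shape).
    Per plug: (last snapshot inside the range) - (last snapshot before it),
    clamping negative deltas (counter resets) to zero."""
    total = 0.0
    for plug in {s["plug"] for s in snapshots}:
        rows = sorted((s["t"], s["kwh"]) for s in snapshots if s["plug"] == plug)
        in_range = [kwh for t, kwh in rows if start <= t <= end]
        baseline = next((kwh for t, kwh in reversed(rows) if t < start), None)
        if not in_range or baseline is None:
            continue  # incomplete snapshot history: the "warming up" case
        total += max(in_range[-1] - baseline, 0.0)
    return total
```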
  • Virtual Printer "Synchronizing device information" Times Out in Orca (#927) — OrcaSlicer's "Send job" flow sat on "Synchronizing device information…" until it gave up, even though the FTP upload itself worked when the user clicked "Send job anyway". The virtual printer's MQTT server gated all incoming command handling on f"device/{self.serial}/request" in topic — if the slicer's cached serial for the VP didn't exactly equal the VP's computed self.serial (which depends on model prefix + per-VP serial_suffix), every get_version, pushall, and project_file publish was silently dropped. Nothing was logged past the initial "MQTT publish to …" line, so the slicer never received a push_status or get_version response on its subscribed device/{serial}/report topic and hit its sync timeout. Status pushes, version responses, and project_file acknowledgments were also being published on device/{self.serial}/report, so even when the incoming check happened to pass, replies targeted a topic the slicer wasn't listening on if its serial had drifted. Both directions are now serial-adaptive: the handler accepts any authenticated publish on a device/*/request topic, extracts the serial the slicer is actually using from the topic, stores it per-connection, and uses it for every outgoing status report, version response, print acknowledgment, and periodic push so responses always land on the topic the slicer subscribed to. The client's serial is cleared when the connection closes and when the server stops. Regression tests cover the mismatched-serial publish path, the non-request-topic rejection path, the pushall→status_report routing, and the client-serial lifecycle.
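The serial-adaptive handling boils down to extracting the serial the slicer actually used from the incoming topic instead of comparing against a precomputed one, then replying on that serial's report topic. A minimal sketch (`extract_serial` and `report_topic` are illustrative names):

```python
import re

def extract_serial(topic):
    """Accept any device/*/request topic and return the serial the slicer is
    actually using; any other topic is rejected (returns None)."""
    m = re.fullmatch(r"device/([^/]+)/request", topic)
    return m.group(1) if m else None

def report_topic(serial):
    """Replies must target the topic the slicer subscribed to."""
    return f"device/{serial}/report"
```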
  • External Sidebar Link Icon Not Showing (#878) — Custom icons uploaded for external sidebar links rendered correctly in the edit dialog but were missing from the sidebar itself, and opening the icon URL directly returned {"detail":"Valid camera stream token required..."}. The sidebar <img> tag in Layout.tsx used a raw /api/v1/external-links/{id}/icon URL, but that endpoint is protected by a query-string stream token (the same mechanism used for camera streams and archive thumbnails, because <img> tags cannot send Authorization headers). The edit dialog already routed through api.getExternalLinkIconUrl(), which wraps the URL via withStreamToken(); the sidebar now does the same, so icons appear when auth is enabled.
  • Shortest Job First Toggle Disappears After Clicking (#879) — The SJF toggle badge on the queue page was rendered inside the Pending Queue section header, which is only shown when there is at least one pending item and the list view is active. Clicking the toggle often coincided with the scheduler starting the only pending print, at which point the Pending section unmounted and the toggle vanished along with it — making it look like the button had disappeared after clicking. The toggle has been moved to the top of the queue page, next to the list/timeline view switcher, so it stays reachable regardless of pending-item count, active filters, or the selected view mode.
  • SpoolBuddy Update Fails in Docker with "no user exists for uid 1000/1001" — The SpoolBuddy remote-update flow shelled out to the OpenSSH ssh-keygen and ssh binaries for keypair creation and command execution. Both binaries call getpwuid(getuid()) at startup and abort with No user exists for uid <N> when the container runs under an arbitrary PUID that is not listed in /etc/passwd (the stock python:3.13-slim image only has an entry for root, so running with user: "1000:1000", "1001:1001", or any non-root user tripped the same error). The entire SpoolBuddy update path is now subprocess-free: keypairs are generated in-process via the cryptography library (already a dependency), SSH commands run through the pure-Python asyncssh client, and git-branch detection reads .git/HEAD directly instead of shelling out to git. asyncssh also calls getpass.getuser() for local ~/.ssh/config host matching, which hit the same passwd lookup failure; the Docker image now sets LOGNAME=bambuddy, USER=bambuddy, and HOME=/app so getpass.getuser() resolves via env vars before touching the passwd database, and asyncssh.connect() is called with config=[] so it does not attempt to load ~/.ssh/config at all. Branch detection also now looks for .git/HEAD in the application root rather than settings.base_dir — in Docker the data directory is a separate volume (DATA_DIR=/app/data) that never contains .git. Finally, the Docker build now bakes .git/HEAD into the image (.dockerignore allows this single 20-byte file through the context filter) so the production image knows which branch it was built from; previously the .git directory was excluded from the build context entirely, leaving the container with no git metadata and causing the SpoolBuddy update flow to always pull main on the remote device regardless of which branch Bambuddy itself was built from. 
    Native installs behave identically — they already worked because the running user was always in /etc/passwd and .git/HEAD was readable from the project root. Regression tests assert that neither keypair creation nor command execution spawns any subprocess, and that branch detection reads from the application root even when a decoy .git sits inside the data dir.
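The subprocess-free branch detection amounts to parsing `.git/HEAD` directly; a sketch under the assumption that HEAD holds either a symbolic ref or a bare commit hash (`current_branch` is an illustrative name):

```python
from pathlib import Path

def current_branch(app_root):
    """Read the checked-out branch from .git/HEAD without shelling out to
    git. HEAD contains either 'ref: refs/heads/<branch>' or, for a detached
    HEAD, a bare commit hash (no branch name available)."""
    text = (Path(app_root) / ".git" / "HEAD").read_text().strip()
    prefix = "ref: refs/heads/"
    return text[len(prefix):] if text.startswith(prefix) else None
```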
  • Camera Stream "6 of 5" Reconnect Counter + ffmpeg Log Flood (#925) — Two bugs surfaced while investigating camera reconnect behaviour. First, the camera page briefly displayed "Reconnecting attempt 6 of 5" before giving up, because the attempt counter could be incremented to the maximum while the reconnect banner was still rendering. The displayed value is now clamped to the configured maximum. Second, every failed ffmpeg spawn logged the full ~20-line ffmpeg version/configuration banner, producing hundreds of lines of noise per failed camera click (one reported click produced 555 log lines across 30 retries). A new stderr summarizer strips the ffmpeg banner before logging so only the actual error lines remain. The underlying "camera service stops accepting new connections after prolonged uptime" behaviour in the X1C firmware is still under investigation.
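Both fixes are small and can be sketched together; the banner-prefix heuristic below is an assumption about how the summarizer recognizes ffmpeg's banner, and both function names are illustrative:

```python
# Assumed heuristic: ffmpeg's banner lines start with these prefixes
# ("lib" covers the libavutil/libavcodec/... version rows).
BANNER_PREFIXES = ("ffmpeg version", "built with", "configuration:", "lib")

def summarize_stderr(stderr):
    """Drop the ~20-line ffmpeg version/configuration banner so only the
    actual error lines are logged."""
    return [ln for ln in stderr.splitlines()
            if ln.strip() and not ln.strip().startswith(BANNER_PREFIXES)]

def display_attempt(attempt, max_attempts):
    # Clamp so the banner never reads "attempt 6 of 5".
    return f"attempt {min(attempt, max_attempts)} of {max_attempts}"
```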
  • LDAP POSIX Primary Group Ignored — LDAP authentication only looked at groups that listed the user explicitly via memberUid (supplementary group membership). A user's POSIX primary group — referenced by the gidNumber attribute on the user object and matching the gidNumber on a posixGroup — was ignored entirely, so users whose role came from their primary group landed without the expected permissions. The authenticator now also searches for posixGroup entries whose gidNumber matches the user's primary gidNumber, and dedupes DNs case-insensitively before resolving the group mapping (LDAP DNs are case-insensitive by spec).
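The combined membership check and case-insensitive DN dedupe can be sketched as follows; the entry/user shapes and the `user_groups` name are illustrative, not the authenticator's actual LDAP search code:

```python
def user_groups(entries, user):
    """Collect groups that list the user via memberUid (supplementary) or
    whose gidNumber matches the user's primary gidNumber (POSIX primary),
    deduping DNs case-insensitively since LDAP DNs are case-insensitive."""
    dns, seen = [], set()
    for entry in entries:
        supplementary = user["uid"] in entry.get("memberUid", [])
        primary = entry.get("gidNumber") == user.get("gidNumber")
        if (supplementary or primary) and entry["dn"].lower() not in seen:
            seen.add(entry["dn"].lower())
            dns.append(entry["dn"])
    return dns
```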
  • Support Bundle Leaks Virtual Printer IP Address — The debug support bundle included the virtual_printer_remote_interface_ip setting value unmasked in support-info.json. The setting key didn't match any of the existing sensitive-key filters, so the raw IP address was included in the bundle. Added _ip to the sensitive key filter so IP address settings are excluded from support bundles. Log file content was already covered by the existing IPv4 regex redaction.
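A suffix-based filter of this kind might look like the sketch below; only the `_ip` suffix is confirmed by these notes, and the other suffixes plus the redaction-instead-of-exclusion behavior are assumptions for illustration:

```python
# Only "_ip" is confirmed by the release notes; the rest are assumed
# typical sensitive-key suffixes.
SENSITIVE_SUFFIXES = ("_password", "_token", "_secret", "_ip")

def redact_settings(settings):
    """Mask values whose key matches a sensitive suffix so they never reach
    support-info.json; '_ip' now covers settings like
    virtual_printer_remote_interface_ip."""
    return {k: ("<redacted>" if k.endswith(SENSITIVE_SUFFIXES) else v)
            for k, v in settings.items()}
```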
  • "Build Plate Cleared" Button Unclickable After Second Print (#912) — After completing the first queued print and confirming the plate was cleared, the "Build plate cleared — ready for next print" button became unresponsive after the second print finished. The React Query mutation's isSuccess state persisted from the first plate-clear confirmation, causing the component to render the static "Plate Ready" confirmation instead of the clickable button. The mutation state is now reset when the printer leaves the FINISH/FAILED state, so the button works correctly on every print cycle.
  • Spoolman Location Not Cleared When Spool Removed from AMS (#921) — When Spoolman auto-sync was enabled and a spool was removed from an AMS slot, its location in Spoolman was never cleared, causing "double-booked" slots where multiple spools shared the same location. The auto-sync callback set locations for newly inserted spools but skipped the cleanup step that clears stale locations. The location clearing logic now runs after every auto-sync cycle. Also fixed the single-printer manual sync endpoint which didn't track synced spool IDs, risking incorrect location clearing for location-matched (non-RFID) spools.
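The missing cleanup step reduces to a set difference between the previous cycle's synced spools and the spools currently present; a minimal sketch with illustrative names and data shapes:

```python
def locations_to_clear(previous_locations, current_spool_ids):
    """Spools that had an AMS location on the last sync cycle but are absent
    now must have their Spoolman location cleared, otherwise slots end up
    'double-booked' with multiple spools sharing one location."""
    return {spool_id: location
            for spool_id, location in previous_locations.items()
            if spool_id not in current_spool_ids}
```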
