Summary
Release v8.12.0. Bundles four changes:
- Mixed-compose `g:` handling — Compose apps with a mix of `g:` (master/slave) and non-`g:` components no longer leave their non-`g:` siblings stuck stopped on boot or blocked from auto-restart at runtime. Decisions in both `stoppedAppsRecovery` and `peerNotification` are now made per-component on the actual `containerData`, instead of treating the whole app as `g:` if any sibling is. The 30-minute install grace also stops bleeding onto non-syncthing components.
- Larger / longer uploads — formidable body limit raised from 5 GB to 10 GB, and the Node HTTP server `requestTimeout` is set to 2 hours so app backups and FluxShare uploads survive slow connections. `headersTimeout` is left at the default to keep slowloris protection on the request line.
- Docker image size limit raised to 5 GB — `config.fluxapps.maxImageSize` bumped from 2 GB to 5 GB for all apps. Allows larger images network-wide without per-owner gating.
- `appSpawner` cadence and deferral bug fixes — three related bugs that explain why apps could take up to 30 min to install on allowlisted enterprise nodes even though enterprise nodes are supposed to spawn-tick every 60 s.
- Version bump to 8.12.0.
Changes
`g:` syncthing — per-component, not per-app
- `stoppedAppsRecovery`: a new `getNonGComponentIdentifiers()` partitions an app's components and starts only the non-`g:` ones via `appDockerStart(componentName_appName)` (sketched below). Apps where every component is `g:` are still skipped entirely (left to `masterSlaveApps`). Adds an `appsPartiallyStarted` bucket to the results for mixed apps.
- `peerNotification`: classifies the specific stopped component (not the app) for two decisions — (a) whether to call `handleMissingMasterSlaveContainer` (only when the stopped component itself is `g:`), and (b) whether the 30-minute install grace applies (only `r:`/`g:` components). Gracefully handles legacy v≤3 single-component specs.
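A minimal sketch of the per-component partition, assuming a simplified flag check on `containerData` and a simplified spec shape — the real helper in `stoppedAppsRecovery` may differ:

```js
// Illustrative only: partition a compose app's components by whether their own
// containerData carries the g: (master/slave) flag, and return Docker identifiers
// for the non-g: ones. Flag detection and spec shape are simplified assumptions.
function getNonGComponentIdentifiers(appSpecs) {
  // Legacy v<=3 specs have a single top-level containerData instead of compose[].
  const components = appSpecs.compose
    || [{ name: appSpecs.name, containerData: appSpecs.containerData || '' }];

  const isG = (component) => (component.containerData || '').includes('g:');

  return components
    .filter((component) => !isG(component))
    .map((component) => `${component.name}_${appSpecs.name}`); // componentName_appName
}

// An all-g: app yields [] and stays skipped (left to masterSlaveApps); a mixed app
// yields only its non-g: identifiers, each of which is passed to appDockerStart().
```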
Uploads
- `ZelBack/src/lib/fluxServer.js`: `server.requestTimeout` set to 2 hours.
- `ZelBack/src/services/IOUtils.js`: `maxFileSize` raised from 5 GB to 10 GB (both sketched below).
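A minimal sketch of the two settings, assuming a plain Node `http` server and the formidable v2-style factory API; the wiring is illustrative, not the actual `fluxServer.js` / `IOUtils.js` code:

```js
const http = require('http');
const formidable = require('formidable');

const server = http.createServer(/* express app */);

// Allow slow app-backup / FluxShare uploads to run for up to 2 hours before the
// request socket is destroyed.
server.requestTimeout = 2 * 60 * 60 * 1000;
// server.headersTimeout is deliberately left at the Node default, so clients that
// dribble out the request line and headers (slowloris) are still cut off early.

// formidable body limit: 10 GB instead of the previous 5 GB.
const form = formidable({ maxFileSize: 10 * 1024 * 1024 * 1024 });
```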
Image size limit
`ZelBack/config/default.js` and `tests/unit/globalconfig/default.js`: `fluxapps.maxImageSize` 2 GB → 5 GB. Single uniform limit, no plumbing changes — the existing limit is already enforced through `verifyRepository` and `verifyAndPullImage`.
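Illustrative shape of the config change only — the surrounding keys and the exact unit convention in `default.js` are assumptions here:

```js
module.exports = {
  fluxapps: {
    // raised from a 2 GB cap; enforced downstream by verifyRepository and verifyAndPullImage
    maxImageSize: 5 * 1024 * 1024 * 1024,
  },
};
```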
`appSpawner` fixes (`ZelBack/src/services/appLifecycle/appSpawner.js`)
- Spawn cadence honored on empty-candidate paths. `getSpawnDelays(isEnterprise=true, …)` returns `delayTime: 60_000`, but the two early-exit branches in `trySpawningGlobalApplication` were sleeping on a hardcoded `30 * 60 * 1000` instead of `delayTime`. On a quiet enterprise node every spawn cycle was 30 min apart in production logs (00:27:25 → 00:57:25 → 01:27:25 → …), bounding registration→install latency at ~30 min instead of the intended ~60 s. Both hardcodes replaced with `delayTime`. Non-enterprise nodes unaffected — `getSpawnDelays(false, 0)` already returns `30 * 60 * 1000`.
- Inverted `timeToCheck` comparison. `appsToBeCheckedLater` / `appsSyncthingToBeCheckedLater` entries are pushed with a future `timeToCheck` so the spawner can come back to them later. The lookup was `findIndex((app) => app.timeToCheck >= Date.now())`, which matches entries whose timer has not elapsed — opposite of intent. Effect: deferred entries were immediately re-pulled on the next iteration, bypassing every "will check in around 27m / 2m" deferral. Both comparisons flipped to `<=`.
- `.includes(predicate)` was a silent no-op. `Array.prototype.includes` does a SameValueZero equality check, not a predicate match, so `appsToBeCheckedLater.includes((appAux) => appAux.appName === app.name)` was always `false` and the filter never excluded anything. Replaced with `.some(predicate)`. All three fixes are condensed in the sketch below.
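A condensed before/after sketch of the three fixes, using simplified stand-ins for the spawner's own helpers and state; names follow the description above, and the surrounding control flow is omitted:

```js
// Stand-in for whatever sleep helper the spawner uses internally.
const delay = (ms) => new Promise((resolve) => { setTimeout(resolve, ms); });

async function spawnerIterationSketch({ delayTime, appsToBeCheckedLater, app }) {
  // 1) Cadence: the early-exit branches now sleep for delayTime (60_000 on enterprise
  //    nodes) instead of the hardcoded 30 * 60 * 1000 they used before.
  await delay(delayTime);

  // 2) Deferral: pick entries whose timer HAS elapsed. The old `>=` matched timers
  //    that had not elapsed yet, so deferred entries were re-pulled immediately.
  const dueIndex = appsToBeCheckedLater.findIndex((entry) => entry.timeToCheck <= Date.now());

  // 3) Filtering: .includes(predicate) compares the predicate function itself against
  //    array elements (SameValueZero) and is always false; .some(predicate) runs it.
  const alreadyDeferred = appsToBeCheckedLater.some((entry) => entry.appName === app.name);

  return { dueIndex, alreadyDeferred };
}
```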
Tests
- New per-component `g:` cases in `tests/unit/peerNotification.test.js` (mixed compose: non-`g:` auto-starts, `g:` sibling left alone; `r:`-only path).
- New cases in `tests/unit/stoppedAppsRecovery.test.js` covering mixed-compose partial start.
- Small fix in `tests/unit/syncthingHealthMonitor.test.js` to expect `sendMessage=true` on the remove-threshold path.
Test plan
- Unit: `npm run test:zelback:unit` — new `peerNotification`, `stoppedAppsRecovery`, and `syncthingHealthMonitor` cases pass; `appSpawner` / `enterpriseNetwork` / `globalState` suites pass (45/45).
- Boot recovery: install a mixed compose app (one `g:` component, one plain). Stop all containers, restart FluxOS, and confirm only the non-`g:` component auto-starts while the `g:` component remains for `masterSlaveApps` to elect a primary.
- Runtime restart: with the same app running, manually stop the non-`g:` component; confirm `peerNotification` auto-restarts it without the 30-minute grace.
- All-`g:` app: confirm it is still skipped end-to-end (no regression in master/slave behavior).
- Legacy v≤3 app with `g:` `containerData`: confirm it is still skipped.
- FluxShare upload of a >5 GB file (e.g. 7–8 GB) succeeds.
- A large upload over a slow link runs past the previous 5-minute `requestTimeout` without being killed.
- Validate that a 3–5 GB Docker image deploys successfully and that a >5 GB image is rejected.
- Deploy to an enterprise node and confirm the spawn loop logs `trySpawningGlobalApplication - Checking for apps...` at ~60 s cadence instead of every 30 min.
- Register a fresh enterprise app and confirm the install lands on an allowlisted node within ~2 min of broadcast propagation.