🐛 Patch Changes
- 19f6b0e: Four small Docker Compose quality-of-life fixes, all verified against an existing install without data loss:
  - Project name pinned to `mnfst`. Docker Compose used to infer the project name from the install directory's basename (typically `manifest`), so two unrelated projects both happening to live in a `manifest/` directory would silently share a container namespace — the user who reported this saw a `Found orphan containers` warning from a completely unrelated container. Added `name: mnfst` at the top of `docker/docker-compose.yml`. Container names move from `manifest-manifest-1`/`manifest-postgres-1` to `mnfst-manifest-1`/`mnfst-postgres-1`.
  - `pgdata` volume name pinned to `manifest_pgdata`. With the project rename, Docker would have created a fresh, empty `mnfst_pgdata` volume on the next `up`, orphaning every existing self-hoster's database. Pinning `volumes.pgdata.name` to the historical `manifest_pgdata` keeps the new compose file attached to the existing data. Verified locally: tore down an existing `manifest` stack, booted the new file from a different directory, and confirmed the `mnfst-postgres-1` container mounted `manifest_pgdata` with all 51 migrations intact.
  - Healthcheck `start_period` 45s → 90s. On a cold first pull, Docker was flipping the container to `unhealthy` before the backend had finished pulling images, running migrations, and warming the pricing cache. The 90s grace period gives real installs room to boot.
  - Log rotation. Default Docker `json-file` logging is unbounded — a long-running install can silently fill the host disk. Both services now cap logs at 5 × 10 MB per container (~50 MB ceiling each).
  - CI: added an `install-script` job in `docker-smoke.yml` that runs the actual `docker/install.sh` end-to-end against the PR-built image. It caught the `${p}` healthcheck-escape regression retroactively — and will catch the next one before it ships. The installer now reads its source from `MANIFEST_INSTALLER_SOURCE` (defaults to `main` on GitHub), so the CI job can point it at a local HTTP server serving the branch under test.
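Taken together, the compose-level changes above can be sketched as one abridged fragment. Only the `name`, `start_period`, `logging`, and `volumes` fields come from this changeset; the service layout and everything elided are illustrative assumptions, not the actual file:

```yaml
# docker/docker-compose.yml (abridged sketch, not the real file)
name: mnfst # pin the project name instead of inheriting the directory basename

services:
  manifest:
    # image, ports, env, etc. elided
    healthcheck:
      # test/interval elided; 90s covers cold pull + migrations + cache warm
      start_period: 90s
    logging:
      driver: json-file
      options:
        max-size: 10m # 5 files x 10 MB ~= 50 MB ceiling per container
        max-file: "5"
  postgres:
    # the same log cap applies to the database container
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: "5"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
    name: manifest_pgdata # keep attaching to existing installs' data volume
```

Without the `volumes.pgdata.name` pin, the `name: mnfst` rename alone would have pointed next `up` at a brand-new `mnfst_pgdata` volume — the two pins only work as a pair.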
- 321a644: Route OpenAI Codex, `-pro`, `o1-pro`, and deep-research models to `/v1/responses` for API-key users. Closes #1660.
  - OpenAI's `gpt-5.3-codex`, `gpt-5-codex`, `gpt-5.1-codex*`, `gpt-5.2-codex`, `gpt-5-pro`, `gpt-5.2-pro`, `o1-pro`, and `o4-mini-deep-research` only accept `api.openai.com/v1/responses` — they return HTTP 400 "not a chat model" on `/v1/chat/completions`. Manifest's subscription path already routed these correctly via the ChatGPT Codex backend, but API-key users always hit `/v1/chat/completions` and failed. Prod telemetry: 31 distinct users attempted Codex on API keys over the last 90 days, with a 36% error rate and one user stuck in a 1,400-call retry loop at 98% failure.
  - New `openai-responses` provider endpoint targeting `api.openai.com/v1/responses`, reusing the existing `chatgpt` format (the same `toResponsesRequest`/`convertChatGptResponse` converters used by the subscription path — just with a plain `Authorization: Bearer` header instead of the Codex-CLI masquerade). `ProviderClient.resolveEndpoint` swaps `openai` → `openai-responses` at forward time for any model matching the Responses-only regex. Subscription OAuth still routes to `openai-subscription` as before; custom endpoints are never overridden.
  - Model discovery no longer drops Codex/`-pro`/`o1-pro`/deep-research models — they're kept so users can select them and the proxy routes them transparently. `gpt-image-*` is moved to the non-chat filter (it was only incidentally caught by the old Responses-only filter; it's image generation, not a chat model).
  - `OPENAI_RESPONSES_ONLY_RE` moved to `common/constants/openai-models.ts` with a shared `stripVendorPrefix` helper, so discovery and the proxy read the same source of truth without cross-module coupling.
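The forward-time swap described above can be sketched in TypeScript. Everything here is a reconstruction from the changelog text: the regex is inferred from the model names listed, and the `stripVendorPrefix` and `resolveEndpoint` bodies are hypothetical stand-ins for the real implementations in `common/constants/openai-models.ts` and `ProviderClient`:

```typescript
// Assumed shape of the Responses-only matcher, inferred from the model names
// in this changelog (codex, gpt-5*-pro, o1-pro, o4-mini-deep-research).
const OPENAI_RESPONSES_ONLY_RE =
  /^(gpt-5(\.\d+)?-(codex|pro)|o1-pro|o\d+-mini-deep-research)/;

// Hypothetical stand-in for the shared stripVendorPrefix helper:
// drops a "vendor/" prefix such as "openai/gpt-5-codex" before matching.
function stripVendorPrefix(model: string): string {
  const slash = model.indexOf("/");
  return slash === -1 ? model : model.slice(slash + 1);
}

// Hypothetical stand-in for ProviderClient.resolveEndpoint: only the plain
// API-key "openai" endpoint is rerouted; subscription OAuth and custom
// endpoints pass through untouched.
function resolveEndpoint(endpoint: string, model: string): string {
  if (
    endpoint === "openai" &&
    OPENAI_RESPONSES_ONLY_RE.test(stripVendorPrefix(model))
  ) {
    return "openai-responses";
  }
  return endpoint;
}
```

The key property is that the swap happens at forward time, so discovery can keep listing the Responses-only models and the user-facing model id never changes.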