manifest v5.49.1


🐛 Patch Changes

  • 19f6b0e: Four small Docker Compose quality-of-life fixes, all verified against an existing install without data loss:

    • Project name pinned to mnfst. Docker Compose used to infer the project name from the install directory's basename (typically manifest), so two unrelated projects both happening to live in a manifest/ directory would silently share container namespace — the user who reported this saw a Found orphan containers warning from a completely unrelated container. Added name: mnfst at the top of docker/docker-compose.yml. Container names move from manifest-manifest-1 / manifest-postgres-1 to mnfst-manifest-1 / mnfst-postgres-1.
    • pgdata volume name pinned to manifest_pgdata. With the project rename, Docker would have created a fresh empty mnfst_pgdata volume on next up, orphaning every existing self-hoster's database. Pinning volumes.pgdata.name to the historical manifest_pgdata keeps the new compose file attaching to the existing data. Verified locally: tore down an existing manifest stack, booted the new file from a different directory, confirmed the mnfst-postgres-1 container mounted manifest_pgdata with all 51 migrations intact.
    • Healthcheck start_period 45s → 90s. On a cold first pull, Docker was flipping the container to unhealthy before the backend had finished pulling images + running migrations + warming the pricing cache. The 90s grace gives real installs room to boot.
    • Log rotation. Default Docker json-file logging is unbounded — a long-running install can silently fill the host disk. Both services now cap at 5 × 10 MB per container (~50 MB ceiling each).
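
    Taken together, the four fixes above correspond to compose fields like the following. This is a hedged sketch, not the full docker/docker-compose.yml — the service layout is inferred from the container names mentioned above, and the real file carries more configuration:

    ```yaml
    # Sketch of the relevant fields only — not the full docker/docker-compose.yml.
    name: mnfst                      # fix 1: pin the Compose project name

    services:
      manifest:
        healthcheck:
          start_period: 90s          # fix 3: was 45s; room for cold-start migrations
        logging:
          driver: json-file
          options:
            max-size: 10m            # fix 4: 5 × 10 MB cap per container
            max-file: "5"
      postgres:
        volumes:
          - pgdata:/var/lib/postgresql/data
        logging:
          driver: json-file
          options:
            max-size: 10m
            max-file: "5"

    volumes:
      pgdata:
        name: manifest_pgdata        # fix 2: keep attaching to the pre-rename volume
    ```

    The `volumes.pgdata.name` pin is the piece that makes the project rename safe: without it, Compose would derive the volume name from the new project name and create an empty mnfst_pgdata.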

    CI: added an install-script job in docker-smoke.yml that runs the actual docker/install.sh end-to-end against the PR-built image. Caught the ${p} healthcheck-escape regression retroactively — and will catch the next one before it ships. The installer now reads its source from MANIFEST_INSTALLER_SOURCE (defaults to main on GitHub), so the CI job can point it at a local HTTP server serving the branch under test.
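
    The MANIFEST_INSTALLER_SOURCE override described above can be as simple as a default-expansion at the top of the installer. A minimal sketch — the variable name comes from the notes, but the default URL and usage here are assumptions, not verified against docker/install.sh:

    ```shell
    #!/usr/bin/env sh
    # Hypothetical sketch: resolve where the installer fetches its assets from.
    # CI can export MANIFEST_INSTALLER_SOURCE to point at a local HTTP server
    # serving the branch under test; real installs fall back to main on GitHub.
    SOURCE="${MANIFEST_INSTALLER_SOURCE:-https://raw.githubusercontent.com/mnfst/manifest/main}"
    echo "installer source: ${SOURCE}"
    ```

    With nothing exported, the fallback applies; `MANIFEST_INSTALLER_SOURCE=http://localhost:8000 sh install.sh` would redirect every fetch in the script that builds URLs from `${SOURCE}`.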

  • 321a644: Route OpenAI Codex, -pro, o1-pro, and deep-research models to /v1/responses for API-key users. Closes #1660.

    OpenAI's gpt-5.3-codex, gpt-5-codex, gpt-5.1-codex*, gpt-5.2-codex, gpt-5-pro, gpt-5.2-pro, o1-pro, and o4-mini-deep-research only accept api.openai.com/v1/responses — they return HTTP 400 "not a chat model" on /v1/chat/completions. Manifest's subscription path already routed these correctly via the ChatGPT Codex backend, but API-key users always hit /v1/chat/completions and failed. Prod telemetry: 31 distinct users attempting Codex on API keys over the last 90 days, 36% error rate, one user stuck in a 1,400-call retry loop at 98% failure.

    • New openai-responses provider endpoint targeting api.openai.com/v1/responses, reusing the existing chatgpt format (same toResponsesRequest / convertChatGptResponse converters used by the subscription path — just with a plain Authorization: Bearer header instead of the Codex-CLI masquerade).
    • ProviderClient.resolveEndpoint swaps openai → openai-responses at forward time for any model matching the Responses-only regex. Subscription OAuth still routes to openai-subscription as before; custom endpoints are never overridden.
    • Model discovery no longer drops Codex/-pro/o1-pro/deep-research — they're kept so users can select them and the proxy routes them transparently. gpt-image-* is moved to the non-chat filter (it was only incidentally caught by the old Responses-only filter; it's image generation, not a chat model).
    • OPENAI_RESPONSES_ONLY_RE moved to common/constants/openai-models.ts with a shared stripVendorPrefix helper, so discovery and the proxy read the same source of truth without cross-module coupling.
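
    The forward-time swap can be sketched as below. This is a hypothetical reconstruction from the notes: the identifiers OPENAI_RESPONSES_ONLY_RE, stripVendorPrefix, and the resolveEndpoint shape come from the text, but the exact regex and function signatures are assumptions, not the code in common/constants/openai-models.ts:

    ```typescript
    // Hypothetical sketch of the Responses-only routing described above.
    // Matches gpt-5*-codex, gpt-5*-pro, o1-pro, and *deep-research models.
    const OPENAI_RESPONSES_ONLY_RE =
      /^(gpt-5(\.\d+)?-(codex|pro)|o1-pro|.*deep-research)/;

    // Drop a leading "openai/" vendor prefix so discovery and the proxy
    // test the same bare model name.
    function stripVendorPrefix(model: string): string {
      return model.replace(/^openai\//, "");
    }

    // Swap the endpoint at forward time for Responses-only models.
    // Subscription OAuth and custom endpoints are never touched.
    function resolveEndpoint(endpoint: string, model: string): string {
      if (
        endpoint === "openai" &&
        OPENAI_RESPONSES_ONLY_RE.test(stripVendorPrefix(model))
      ) {
        return "openai-responses";
      }
      return endpoint;
    }
    ```

    Keeping the regex and the prefix helper in one shared constants module is what lets model discovery and the proxy agree without importing each other.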
