OpenAI-compatible LLM provider lands the universal-adapter shape: one config (`OPENAI_API_KEY` + `OPENAI_BASE_URL` + `OPENAI_MODEL`) covers OpenAI, Azure OpenAI (auto-detected from the hostname), DeepSeek, SiliconFlow, vLLM, LM Studio, Ollama (via `/v1`), and any future endpoint mirroring `POST /v1/chat/completions`. Worker telemetry now pins a stable `project_name` so engine metrics and traces attribute cleanly. The agent-memory.dev Compare section no longer wraps awkwardly.
## Added

- OpenAI-compatible LLM provider (#307, @fatinghenji). Closes #185, #232 (Ollama works via `OPENAI_BASE_URL=http://localhost:11434/v1`), and #312; supersedes #240.
- Azure OpenAI auto-detection. A `.openai.azure.com` hostname swaps `Authorization: Bearer` for `api-key`, drops the `/v1` path prefix, and appends an `api-version=<version>` query parameter (default `2024-08-01-preview`, override via `OPENAI_API_VERSION`).
- `OPENAI_TIMEOUT_MS` env var. AbortController-bounded fetch, 60s default, with a clear timeout error that names the env var. Other raw-fetch providers are tracked in #373.
- `OPENAI_REASONING_EFFORT` passthrough. Forwarded as `reasoning_effort` on the request body for OpenAI reasoning models (o1, o3, gpt-*-reasoning) and providers that mirror that schema. Standard chat models reject the field with a 400; the README documents the caveat. Falls back to `message.reasoning` when `message.content` is empty (the Ollama Cloud thinking-model shape).
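The Azure auto-detection described above can be sketched as a small request-routing helper. `buildAzureAwareRequest` and its return shape are illustrative assumptions, not the provider's actual internals:

```typescript
// Sketch of the Azure OpenAI auto-detection described above.
// buildAzureAwareRequest is an illustrative name, not the real internals.
function buildAzureAwareRequest(
  baseUrl: string,
  apiKey: string,
  apiVersion = "2024-08-01-preview",
): { url: string; headers: Record<string, string> } {
  const parsed = new URL(baseUrl);

  if (!parsed.hostname.endsWith(".openai.azure.com")) {
    // Standard OpenAI-compatible endpoint: bearer auth, path kept as-is.
    return {
      url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
      headers: { Authorization: `Bearer ${apiKey}` },
    };
  }

  // Azure: api-key header instead of a bearer token, no /v1 path prefix,
  // and the API version passed as a query parameter.
  parsed.pathname = `${parsed.pathname.replace(/\/v1\/?$/, "")}/chat/completions`;
  parsed.searchParams.set("api-version", apiVersion);
  return { url: parsed.toString(), headers: { "api-key": apiKey } };
}
```

Called with an Azure deployment URL, this yields a `/chat/completions` URL carrying `api-version=…` and an `api-key` header in place of `Authorization`.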
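The `OPENAI_TIMEOUT_MS` behavior amounts to an AbortController-bounded fetch. A minimal sketch, assuming illustrative naming and error text (not the provider's actual code):

```typescript
// Sketch: AbortController-bounded fetch with an env-var hint in the
// timeout error, as described for OPENAI_TIMEOUT_MS. Illustrative only.
async function fetchWithTimeout(
  url: string,
  init: Parameters<typeof fetch>[1] = {},
  timeoutMs = Number(process.env.OPENAI_TIMEOUT_MS ?? 60_000),
) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } catch (err) {
    if (controller.signal.aborted) {
      // Replace the opaque AbortError with an actionable message.
      throw new Error(
        `LLM request timed out after ${timeoutMs}ms; raise OPENAI_TIMEOUT_MS if the model needs more time`,
      );
    }
    throw err;
  } finally {
    clearTimeout(timer);
  }
}
```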
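The `message.reasoning` fallback can be sketched as a small extractor over the chat-completions message shape; `extractText` is a hypothetical helper name:

```typescript
// Sketch of the content fallback described above: some thinking-model
// servers (the Ollama Cloud shape) return the answer under
// message.reasoning while message.content is empty.
interface ChatMessage {
  content?: string | null;
  reasoning?: string | null;
}

function extractText(message: ChatMessage): string {
  // Prefer content; fall back to reasoning only when content is empty.
  return message.content?.trim() ? message.content : message.reasoning ?? "";
}
```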
## Changed

- `telemetry.project_name` pinned to `"agentmemory"` (#426). iii-sdk auto-detection produces inconsistent identifiers per host (`agentmemory`, `node`, `npm`, occasionally the user's home-directory basename via npx). Pinning gives every install the same stable identifier in engine metrics and traces. Also pins `language` and `framework`.
- `OPENAI_API_KEY_FOR_LLM=false` opt-out. `detectLlmProviderKind` now mirrors `detectProvider`'s existing gate, so users who set `OPENAI_API_KEY` only for embeddings won't see the LLM auto-activate. The README leads with an explicit shared-use callout.
- Compare section (#427). Title `AGENTMEMORY VS. THE FIELD.` → `VS. THE FIELD.` (the eyebrow already says VS.). `text-wrap: balance` applied globally to `.section-title`. `NATIVE PLUGINS` cell changed from `6 (Claude/Codex/OpenClaw/Hermes/pi/OpenHuman)` to `6` (the names are already shown in the Agents grid). Row grid rebalanced, plus `word-break: break-word` and 24px cell padding so cells like `YES (APACHE-2.0)` have breathing room.
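A minimal sketch of the opt-out gate, assuming a simplified `detectLlmProviderKind` that handles only the OpenAI-compatible kind (the real function covers more providers):

```typescript
// Sketch of the OPENAI_API_KEY_FOR_LLM=false gate described above,
// with a simplified kind union and env shape.
type LlmProviderKind = "openai-compatible" | "none";

function detectLlmProviderKind(
  env: Record<string, string | undefined>,
): LlmProviderKind {
  // Explicit opt-out wins: the key may be set only for embeddings.
  if (env.OPENAI_API_KEY_FOR_LLM === "false") return "none";
  return env.OPENAI_API_KEY ? "openai-compatible" : "none";
}
```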
## Install

```bash
npm install -g @agentmemory/agentmemory@0.9.17
agentmemory
```

Try with any OpenAI-compatible endpoint:
```bash
# Standard OpenAI
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini

# DeepSeek
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.deepseek.com
OPENAI_MODEL=deepseek-chat

# Local Ollama
OPENAI_API_KEY=ollama
OPENAI_BASE_URL=http://localhost:11434/v1
OPENAI_MODEL=llama3.1:8b

# Azure (auto-detected by hostname)
OPENAI_API_KEY=...
OPENAI_BASE_URL=https://my-resource.openai.azure.com/openai/deployments/gpt-4o
OPENAI_API_VERSION=2024-08-01-preview
```

Full changelog: https://github.com/rohitg00/agentmemory/blob/main/CHANGELOG.md#0917--2026-05-16