# Langfuse Observability (Opt-in Build)
Note: Langfuse is not included in the default build; enable it by building with `npm run build:langfuse`. The standard `npm run build` has zero bundle-size impact.
Added optional Langfuse integration for monitoring LLM calls, tool usage, and workflow execution.
## What's Traced
- Chat: Per-message traces with model, token usage, cost, tool calls, and RAG/Web Search usage
- Workflow execution: End-to-end traces with per-node spans
- AI workflow generation: Traces for AI-assisted workflow creation
- CLI providers: Gemini CLI / Claude CLI / Codex CLI chat calls
- MCP tools: Spans for external MCP server tool executions
- RAG sync: `smartSync` execution traces
- Chat compaction: Conversation summarization traces
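The span-per-node pattern used for workflow traces can be sketched with a minimal stand-in tracer. This is illustrative only: the real integration uses the Langfuse SDK, and all names below are assumptions, not the plugin's actual API.

```typescript
// Minimal stand-in for a Langfuse-style trace; the real SDK's API differs,
// but the structure is the same: one trace per execution, one span per node.
interface Span {
  name: string;
  startMs: number;
  endMs?: number;
}

class WorkflowTrace {
  readonly spans: Span[] = [];

  startSpan(name: string): Span {
    const span: Span = { name, startMs: Date.now() };
    this.spans.push(span);
    return span;
  }

  endSpan(span: Span): void {
    span.endMs = Date.now();
  }
}

// Run each workflow node inside its own span so latency and failures
// are attributable to a specific node in the end-to-end trace.
function traceWorkflow(nodes: string[], run: (node: string) => void): WorkflowTrace {
  const trace = new WorkflowTrace();
  for (const node of nodes) {
    const span = trace.startSpan(node);
    try {
      run(node);
    } finally {
      trace.endSpan(span);
    }
  }
  return trace;
}
```

Wrapping `run` in `try`/`finally` ensures a span is closed even when a node throws, so partial executions still produce a complete trace.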
## Cost Tracking
- Built-in per-token pricing for all Gemini 2.5 / 3 series models
- Google Search Grounding cost estimated per grounded prompt (Gemini 3: $14 per 1K prompts; Gemini 2.x: $35 per 1K prompts)
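The grounding estimate follows directly from those rates. A small sketch, using the per-1K figures from this doc; the model-family keys are illustrative, not the plugin's actual identifiers:

```typescript
// Rates from this doc: $14 per 1K grounded prompts (Gemini 3),
// $35 per 1K grounded prompts (Gemini 2.x). Keys are illustrative.
const GROUNDING_USD_PER_1K_PROMPTS: Record<string, number> = {
  "gemini-3": 14,
  "gemini-2.x": 35,
};

function groundingCostUsd(modelFamily: string, groundedPrompts: number): number {
  const rate = GROUNDING_USD_PER_1K_PROMPTS[modelFamily];
  if (rate === undefined) {
    throw new Error(`no grounding rate known for ${modelFamily}`);
  }
  // Multiply before dividing to keep round per-1K rates exact.
  return (rate * groundedPrompts) / 1000;
}
```

For example, 100 grounded prompts are estimated at $3.50 on a Gemini 2.x model versus $1.40 on Gemini 3.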
## Privacy
- Prompt and response logging are disabled by default; both fields are sent as `[redacted]`
- Each can be enabled individually in the plugin settings
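The redaction default can be sketched as follows. The setting names are hypothetical, not the plugin's actual keys; only the behavior (payloads replaced with `[redacted]` unless the user opts in) comes from this doc.

```typescript
// Hypothetical setting names; the plugin's actual keys may differ.
interface PrivacySettings {
  logPrompts: boolean;
  logResponses: boolean;
}

// Privacy-first default: nothing is logged in clear text until opted in.
const DEFAULT_SETTINGS: PrivacySettings = { logPrompts: false, logResponses: false };

function redact(text: string, enabled: boolean): string {
  return enabled ? text : "[redacted]";
}

// Build the trace payload, redacting each field per its own setting.
function tracePayload(
  prompt: string,
  response: string,
  settings: PrivacySettings = DEFAULT_SETTINGS
): { input: string; output: string } {
  return {
    input: redact(prompt, settings.logPrompts),
    output: redact(response, settings.logResponses),
  };
}
```

Keeping the two toggles independent lets a user log prompts for debugging while still withholding model responses, or vice versa.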
## Build
- `npm run build` — no Langfuse (default; zero bundle-size impact)
- `npm run build:langfuse` — includes the Langfuse SDK with an Obsidian/Electron compatibility patch
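One common way to get a zero-impact opt-in build is a build-time constant gating a dynamic import. This is a hedged sketch assuming an esbuild-style `define` flag; the flag name and the plugin's actual bundler setup are assumptions, not the real configuration.

```typescript
// Hypothetical build-time flag (e.g. injected via an esbuild `define`);
// the plugin's actual flag name and mechanism may differ.
declare const __LANGFUSE_BUILD__: boolean | undefined;

// In the default build the flag is never defined, so the branch below is
// unreachable and a bundler can drop the SDK entirely (zero bundle-size impact).
async function loadLangfuse(): Promise<unknown | null> {
  if (typeof __LANGFUSE_BUILD__ === "undefined" || !__LANGFUSE_BUILD__) {
    return null; // default build: SDK is never loaded
  }
  const moduleName = "langfuse"; // variable specifier: resolved only when reached
  return import(moduleName);
}
```

The `typeof` guard makes the check safe even when the constant was never defined, which is exactly the default-build case.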
