Changelog
- d2e4639 feat(registry): add context length and update max tokens for Claude model configurations
- 72c7ef7 fix(translator): handle non-JSON output parsing for OpenAI function responses
- 7e30157 Fixed: #354
- 0832122 Merge pull request #340 from nestharus/fix/339-thinking-openai-gemini-compat
- e73cdf5 fix(claude): ensure max_tokens exceeds thinking budget for thinking models
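
The last entry (e73cdf5) enforces a simple invariant: when extended thinking is enabled, `max_tokens` must be strictly greater than the configured thinking budget, because thinking tokens count against the same limit. The sketch below is only an illustration of such a clamp; `ThinkingConfig`, `clampMaxTokens`, and `MIN_RESPONSE_TOKENS` are assumed names, not the project's actual API.

```typescript
// Illustrative sketch (not the project's code): raise max_tokens so it
// always exceeds the thinking budget, leaving headroom for the visible answer.

interface ThinkingConfig {
  enabled: boolean;
  budgetTokens: number; // tokens reserved for the model's thinking
}

// Assumed minimum headroom for the final response after thinking.
const MIN_RESPONSE_TOKENS = 1024;

function clampMaxTokens(requestedMaxTokens: number, thinking: ThinkingConfig): number {
  if (!thinking.enabled) {
    return requestedMaxTokens;
  }
  // max_tokens must exceed the thinking budget; keep some room for output.
  const floor = thinking.budgetTokens + MIN_RESPONSE_TOKENS;
  return Math.max(requestedMaxTokens, floor);
}

// Example: a 4k max_tokens request with a 10k thinking budget is raised to 11,024.
console.log(clampMaxTokens(4096, { enabled: true, budgetTokens: 10_000 }));
```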