RubyLLM 1.2.0: Universal OpenAI Compatibility & Custom Models
This release massively expands RubyLLM's reach by adding support for ANY provider with an OpenAI-compatible API. Connect to Azure OpenAI, local models, API proxies, or any service that speaks the OpenAI protocol.
See it in action connecting to a local Ollama server: Watch Demo
🌟 Major Features
- Universal OpenAI Compatibility: Connect to ANY OpenAI-compatible endpoint:
- Azure OpenAI Service
- Self-hosted models (Ollama, LM Studio)
- API proxies (LiteLLM, FastAI)
- Local deployments
- Custom fine-tunes
```ruby
RubyLLM.configure do |config|
  # Works with any OpenAI-compatible API
  config.openai_api_base = "https://your-endpoint/v1"
  config.openai_api_key = ENV.fetch('YOUR_API_KEY', nil) # if needed
end
```
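For example, here is a minimal sketch of connecting to a local Ollama server, assuming Ollama's default OpenAI-compatible endpoint at `http://localhost:11434/v1` and a locally pulled model named `llama3` (adjust both for your setup):

```ruby
require 'ruby_llm'

RubyLLM.configure do |config|
  config.openai_api_base = "http://localhost:11434/v1" # Ollama's OpenAI-compatible endpoint
  config.openai_api_key  = "ollama"                    # Ollama ignores the key, but one must be set
end

chat = RubyLLM.chat(
  model: "llama3",            # assumed: a model you've already pulled locally
  provider: :openai,          # route through the OpenAI-compatible adapter
  assume_model_exists: true   # local models aren't in the official registry
)
puts chat.ask("Say hello from my local model").content
```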
- Unlisted Model Support: For providers or models not in the official registry (needed when using `openai_api_base`):
```ruby
# FIRST: Always try refreshing the model registry
RubyLLM.models.refresh!

# For truly custom/local providers that won't be in the registry:
chat = RubyLLM.chat(
  model: "my-local-model",
  provider: :openai, # Required for custom endpoints
  assume_model_exists: true
)

# or during a chat
chat.with_model(model: "my-local-model", provider: :openai, assume_exists: true)
```
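As the notes below reiterate, `assume_model_exists` is a fallback, not the default path. A rough sketch of that decision flow follows; the registry enumeration via `RubyLLM.models.all` and the `id` attribute are assumptions about the registry API, so check your version's docs:

```ruby
# Sketch only: prefer registered models, and fall back to
# assume_model_exists for endpoints the registry doesn't know about.
RubyLLM.models.refresh!

model_id   = "my-local-model"
registered = RubyLLM.models.all.any? { |m| m.id == model_id } # assumed registry API

chat =
  if registered
    RubyLLM.chat(model: model_id)
  else
    RubyLLM.chat(model: model_id, provider: :openai, assume_model_exists: true)
  end
```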
🚀 Improvements
- Default Model Update: Switched the default model to `gpt-4.1-nano`
- New Models: Added support for:
- LearnLM 2.0 Flash
- O4 Mini models
- Claude 3.7 Sonnet (with proper Bedrock support)
- Fixes:
- Fixed temperature normalization for O-series models
- Added proper support for Bedrock inference profiles
- Documentation: Completely rewrote the documentation
⚠️ Important Notes
- New Models? Always try `RubyLLM.models.refresh!` first
- `assume_model_exists` is for custom/local providers
- The `provider` parameter is mandatory when using `assume_model_exists`
- Deprecated "Computer Use Preview" models removed from registry
```ruby
gem 'ruby_llm', '~> 1.2.0'
```
Merged Pull Requests
- Fix Claude 3.7 Sonnet in Bedrock by @tpaulshippy in #117
Full Changelog: 1.1.2...1.2.0