RubyLLM 1.6.0: Custom Headers, Tool Control, and Provider Classes 🚀
Unlock provider beta features, build sophisticated agent systems, and enjoy a cleaner architecture. Plus a complete documentation overhaul with dark mode!
✨ Custom HTTP Headers
Access cutting-edge provider features that were previously off-limits:
```ruby
# Enable Anthropic's beta streaming features
chat = RubyLLM.chat(model: 'claude-3-5-sonnet')
              .with_headers('anthropic-beta' => 'fine-grained-tool-streaming-2025-05-14')

# Works with method chaining
response = chat.with_temperature(0.5)
               .with_headers('X-Custom-Feature' => 'enabled')
               .ask("Stream me some tool calls")
```
Your headers can't override authentication - provider security always wins. As it should be.
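The ordering guarantee is easy to reason about. Here's a minimal sketch of how such a merge could make auth win (the method and header names are hypothetical, not RubyLLM's internals):

```ruby
# Illustrative only: user-supplied headers are merged in first, so the
# provider's auth headers override any duplicate keys.
def build_headers(user_headers, auth_headers)
  user_headers.merge(auth_headers) # auth values win on conflict
end

headers = build_headers(
  { 'X-Custom-Feature' => 'enabled', 'Authorization' => 'Bearer spoofed' },
  { 'Authorization' => 'Bearer real-key' }
)
headers['Authorization'] # => "Bearer real-key"
```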
🛑 Tool Halt: Skip LLM Commentary (When Needed)
For rare cases where you need to skip the LLM's helpful summaries, tools can now halt continuation:
```ruby
class SaveFileTool < RubyLLM::Tool
  description "Save content to a file"
  param :path, desc: "File path"
  param :content, desc: "File content"

  def execute(path:, content:)
    File.write(path, content)
    halt "Saved to #{path}" # Skip the "I've successfully saved..." commentary
  end
end
```
Note: Sub-agents work perfectly without halt! The LLM's summaries are usually helpful:
```ruby
class SpecialistAgent < RubyLLM::Tool
  description "Delegate to specialist for complex questions"
  param :question, desc: "The question to ask"

  def execute(question:)
    # The router will summarize the expert's response naturally
    expert = RubyLLM.chat(model: 'claude-3-opus')
    expert.ask(question).content # Works great without halt
  end
end
```
Only use `halt` when you specifically need to bypass the LLM's continuation.
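The distinction above boils down to who writes the final message. A conceptual sketch of the halt mechanic (not RubyLLM's real internals; `Halt` and `finish_turn` are stand-ins):

```ruby
# A tool that returns a halt sentinel short-circuits the continuation loop,
# so its message becomes the final answer instead of an LLM-written summary.
Halt = Struct.new(:message)

def finish_turn(tool_result)
  return tool_result.message if tool_result.is_a?(Halt) # skip the LLM turn
  "LLM summary of: #{tool_result}"                      # normal continuation
end

finish_turn(Halt.new("Saved to notes.txt")) # => "Saved to notes.txt"
finish_turn("expert answer")                # => "LLM summary of: expert answer"
```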
🎯 New Models & Providers
Latest Models
- Opus 4.1 - Anthropic's most capable model now available
- GPT-OSS and GPT-5 series - OpenAI's new lineup
- Switched to `gpt-image-1` as the default image generation model
OpenAI-Compatible Server Support
Need to connect to a server that still uses the traditional 'system' role?
```ruby
RubyLLM.configure do |config|
  config.openai_use_system_role = true # Use 'system' role for compatibility
  config.openai_api_base = "http://your-server:8080/v1"
end
```
By default, RubyLLM uses the 'developer' role (matching OpenAI's current API). Enable this setting for older servers.
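Concretely, the flag only changes the role attached to your system prompt in the request payload. An illustrative sketch (the helper below is hypothetical, not RubyLLM API):

```ruby
# How the setting would affect the system-prompt message sent to the server.
def system_message(content, use_system_role:)
  { role: use_system_role ? 'system' : 'developer', content: content }
end

system_message('Be terse.', use_system_role: true)  # => {role: 'system', ...}
system_message('Be terse.', use_system_role: false) # => {role: 'developer', ...}
```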
🏗️ Provider Architecture Overhaul
Providers are now classes, not modules. What this means for you:
- No more credential hell - Ollama users don't need OpenAI keys
- Per-instance configuration - Each provider manages its own state
- Cleaner codebase - No global state pollution
```ruby
# Before: "Missing configuration for OpenAI" even if you only use Ollama
# Now: Just works
chat = RubyLLM.chat(model: 'llama3:8b', provider: :ollama) # No OpenAI key required
```
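A rough sketch of why the class-based design fixes this (class and method names here are hypothetical, not RubyLLM's actual hierarchy): each provider instance carries its own config, so a missing key only matters for the provider you actually call.

```ruby
class Provider
  def initialize(config = {})
    @config = config
  end

  def api_key!
    @config.fetch(:api_key) { raise "Missing configuration for #{self.class}" }
  end
end

class OllamaProvider < Provider
  def api_key!
    nil # local server, no key needed
  end
end

OllamaProvider.new.api_key! # fine without any keys configured
# Provider.new.api_key!     # only this one would raise
```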
🚂 Tool Callbacks
We have a new callback for tool results. Works great even in Rails!
```ruby
class Chat < ApplicationRecord
  acts_as_chat
end

# All methods available on the record:
chat.with_headers(...)
chat.on_tool_call { |call| ... }
chat.on_tool_result { |result| ... } # New callback!
```
📚 Documentation Overhaul
Complete documentation reorganization with:
- Dark mode that follows your system preferences
- New Agentic Workflows guide for building intelligent systems
- Four clear sections: Getting Started, Core Features, Advanced, Reference
- Tool limiting patterns for controlling AI behavior
Check it out at rubyllm.com
🔍 Better Debugging
New stream debug mode for when things go sideways:
```bash
# Via environment variable
RUBYLLM_STREAM_DEBUG=true rails server
```

```ruby
# Or in config
RubyLLM.configure do |config|
  config.log_stream_debug = true
end
```
Shows every chunk, accumulator state, and parsing decision. Invaluable for debugging streaming issues.
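As a toy illustration of the kind of information this exposes (not RubyLLM's actual log format), imagine logging every raw chunk alongside the accumulator state after applying it:

```ruby
def accumulate(chunks, debug: false)
  buffer = +""
  chunks.each do |chunk|
    buffer << chunk
    warn "chunk=#{chunk.inspect} accumulated=#{buffer.inspect}" if debug
  end
  buffer
end

accumulate(["Hel", "lo", " world"]) # => "Hello world"
```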
🐛 Bug Fixes
- JRuby fixes
- Rails callback chaining fixes
- Anthropic models no longer incorrectly claim structured output support
- Test suite improved with proper provider limitation handling
- Documentation site Open Graph images fixed
Installation
```ruby
gem 'ruby_llm', '1.6.0'
```
Full backward compatibility. Your existing code continues to work. New features are opt-in.
Merged PRs
- Wire up on_tool_call when using acts_as_chat rails integration by @agarcher in #318
- Switch default image generation model to gpt-image-1 by @tpaulshippy in #321
- Update which OpenAI models are considered "reasoning" by @gjtorikian in #334
New Contributors
- @agarcher made their first contribution in #318
- @gjtorikian made their first contribution in #334
Full Changelog: 1.5.1...1.6.0