RubyLLM 1.6.1: Tool Choice Freedom and Error Recovery 🛠️
Quick maintenance release with important fixes for tool calling and error recovery. Shipped three days after 1.6.0 because why make you wait?
🎉 Milestone: 2,700+ GitHub stars and 1.9 million downloads! Thank you to our amazing community.
🔧 OpenAI Tool Choice Flexibility
OpenAI already defaults tool_choice to 'auto' when tools are present, but we were hardcoding that value anyway, blocking your overrides:
# Before: Couldn't override tool_choice
chat.with_params(tool_choice: 'required') # Ignored!
# Now: Your choice wins
chat.with_params(tool_choice: 'required') # Works as expected
chat.with_params(tool_choice: 'none') # Disable tools temporarily
chat.with_params(tool_choice: { type: 'function', function: { name: 'specific_tool' }}) # Force specific tool
Thanks to @imtyM for catching this and fixing it in #336.
🔄 Orphaned Tool Message Cleanup
Rate limits can interrupt tool execution mid-flow, leaving orphaned tool result messages. This caused 'tool_use without tool_result' errors on retry:
# The problem flow:
# 1. Tool executes successfully
# 2. Tool result message saved
# 3. Rate limit hits before assistant response
# 4. Retry fails: orphaned tool result confuses the API
# Now: Automatic cleanup on error
chat.ask("Use this tool") # Rate limit? No problem. Orphaned messages cleaned up.
The fix handles both orphaned tool calls and tool results, ensuring clean conversation state after errors.
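For intuition, here's a minimal sketch of that kind of cleanup. It's not RubyLLM's actual internals: it assumes each message exposes role, tool_calls (a hash keyed by tool call ID), and tool_call_id for tool results.
# Hypothetical helper, not part of the gem: drop tool results whose
# originating tool call is missing, and tool calls that never received a result.
def prune_orphaned_tool_messages(messages)
  call_ids   = messages.flat_map { |m| m.tool_calls&.keys || [] }
  result_ids = messages.filter_map { |m| m.tool_call_id if m.role == :tool }

  messages.reject do |m|
    if m.role == :tool
      !call_ids.include?(m.tool_call_id)        # result without its call
    elsif m.role == :assistant && m.tool_calls&.any?
      (m.tool_calls.keys - result_ids).any?     # call without its result
    else
      false
    end
  end
end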
🔄 Tool Switching Mid-Conversation
A new replace: true parameter lets you completely switch or remove tool contexts during a conversation:
# Start with search tools
chat = RubyLLM.chat.with_tools([SearchTool, WikipediaTool])
chat.ask("Research Ruby's history")
# Switch to code tools for implementation
chat.with_tools([CodeWriterTool, TestRunnerTool], replace: true)
chat.ask("Now implement a Ruby parser") # Only code tools available
# Remove all tools for pure conversation
chat.with_tools(nil, replace: true)
chat.ask("Explain your implementation choices") # No tools, just reasoning
# Add review tools when needed
chat.with_tools([LinterTool, SecurityScanTool], replace: true)
chat.ask("Review the code for issues") # Only review tools available
Perfect for multi-phase workflows where different stages need different capabilities - or no tools at all.
🐛 Additional Fixes
- JRuby compatibility: Fixed test mocks so they work on Ruby 3.1
- Documentation: Fixed code example in tool documentation (thanks @tpaulshippy)
- Models update: Latest model registry from all providers
- GPT-5 temperature: Fixed unsupported temperature parameter for reasoning models (#339); see the sketch below
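For context, a hedged sketch of the call pattern that was affected; 'gpt-5' here stands in for whichever reasoning model id you pull from the registry:
# Before 1.6.1, a request like this could fail because reasoning models
# don't accept a custom temperature parameter; #339 fixes the handling.
chat = RubyLLM.chat(model: 'gpt-5')
chat.with_temperature(0.2)   # temperature the model doesn't support
chat.ask("Summarize this release in one sentence")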
Installation
gem 'ruby_llm', '1.6.1'
Full backward compatibility maintained. If you're using OpenAI tools or seeing rate limit errors, this update is recommended.
Merged PRs
- Fix small bug in doc by @tpaulshippy in #340
- Fix: remove tool_choice: auto param for open ai by @imtyM in #336
New Contributors
Full Changelog: 1.6.0...1.6.1