github crmne/ruby_llm 1.3.0


RubyLLM 1.3.0: Just When You Thought the Developer Experience Couldn't Get Any Better 🎉

I'm thrilled to ship RubyLLM 1.3.0 stable! Just when you thought RubyLLM's developer experience couldn't get any better, we've made attachments ridiculously simple, added isolated configuration contexts, and officially ended the era of manual model tracking.

📎 The Attachment Revolution: From Complex to Magical

The biggest transformation in 1.3.0 is how stupidly simple attachments have become. Before, you had to categorize every file:

# The old way (still works, but why would you?)
chat.ask("What's in this image?", with: { image: "diagram.png" })
chat.ask("Describe this meeting", with: { audio: "meeting.wav" })
chat.ask("Summarize this document", with: { pdf: "contract.pdf" })

Now? Just throw files at it and RubyLLM figures out the rest:

# The new way - pure magic ✨
chat.ask("What's in this file?", with: "diagram.png")
chat.ask("Describe this meeting", with: "meeting.wav") 
chat.ask("Summarize this document", with: "contract.pdf")

# Multiple files? Mix and match without thinking
chat.ask("Analyze these files", with: [
  "quarterly_report.pdf",
  "sales_chart.jpg", 
  "customer_interview.wav",
  "meeting_notes.txt"
])

# URLs work too
chat.ask("What's in this image?", with: "https://example.com/chart.png")

This is what the Ruby way looks like: you shouldn't have to think about file types when the computer can figure it out for you.
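To make the idea concrete, here's a minimal sketch of extension-based type inference. This is a hypothetical illustration, not RubyLLM's actual implementation, which also handles MIME types, URLs, and more:

```ruby
# Hypothetical sketch: map file extensions to attachment categories.
# RubyLLM's real detection is more thorough; this just shows the idea.
ATTACHMENT_TYPES = {
  %w[.png .jpg .jpeg .gif .webp] => :image,
  %w[.wav .mp3 .m4a .flac]       => :audio,
  %w[.pdf]                       => :pdf,
  %w[.txt .md .csv]              => :text
}.freeze

def infer_attachment_type(path)
  ext = File.extname(path.to_s).downcase
  ATTACHMENT_TYPES.each do |extensions, type|
    return type if extensions.include?(ext)
  end
  :unknown
end

infer_attachment_type("diagram.png")                     # => :image
infer_attachment_type("https://example.com/chart.png")   # => :image
```

Because `File.extname` works on URLs too, the same lookup covers both local paths and remote files.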

🔄 Isolated Configuration with Contexts

Building multi-tenant applications just got trivial. Contexts let you create isolated configuration scopes without touching your global settings:

# Each tenant gets their own isolated configuration
tenant_context = RubyLLM.context do |config|
  config.openai_api_key = tenant.openai_key
  config.anthropic_api_key = tenant.anthropic_key
  config.request_timeout = 180 # This tenant needs more time
end

# Use it without polluting the global namespace
response = tenant_context.chat.ask("Process this customer request...")

# Global configuration remains untouched
RubyLLM.chat.ask("This still uses your default settings")

Perfect for multi-tenancy, A/B testing different providers, environment targeting, or any situation where you need temporary configuration changes.
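In a multi-tenant app, that pattern naturally becomes a small helper built per request. The tenant fields below (`openai_key`, `slow?`) are assumptions for illustration, not part of RubyLLM:

```ruby
# Hypothetical per-tenant helper; Tenant's openai_key and slow? are
# assumed attributes of your own model, not RubyLLM API.
def chat_for(tenant)
  context = RubyLLM.context do |config|
    config.openai_api_key  = tenant.openai_key
    config.request_timeout = tenant.slow? ? 180 : 60
  end
  context.chat
end
```

Each call builds an isolated context, so tenants can never see each other's keys and your global configuration stays pristine.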

🚂 Rails Integration That Finally Feels Like Rails

The Rails integration now works seamlessly with ActiveStorage:

# Enable attachment support in your Message model
class Message < ApplicationRecord
  acts_as_message
  has_many_attached :attachments # Add this line
end

# Handle file uploads directly from forms
chat_record.ask("Analyze this upload", with: params[:uploaded_file])

# Work with existing ActiveStorage attachments
chat_record.ask("What's in my document?", with: user.profile_document)

# Process multiple uploads at once
chat_record.ask("Review these files", with: params[:files])

We've brought the Rails attachment handling to complete parity with the plain Ruby implementation. No more "it works in Ruby but not in Rails" friction.
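For completeness, here's the chat side of that setup as a sketch. The `acts_as_chat` helper and a persisted `Chat` model follow the existing Rails integration; the model name passed to `create!` is just an example:

```ruby
# Sketch of the chat side of the Rails integration.
class Chat < ApplicationRecord
  acts_as_chat # wires up messages and the ask API on the record
end

# ask persists both the user message (with its attachments via
# ActiveStorage) and the assistant's reply.
chat_record = Chat.create!(model_id: 'gpt-4o-mini') # example model id
chat_record.ask("Analyze this upload", with: params[:uploaded_file])
```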

💻 Local Models with Ollama

Your development machine shouldn't need to phone home to OpenAI every time you want to test something:

RubyLLM.configure do |config|
  config.ollama_api_base = 'http://localhost:11434/v1'
end

# Same API, different model
chat = RubyLLM.chat(model: 'mistral', provider: 'ollama')
response = chat.ask("Explain Ruby's eigenclass")

Perfect for privacy-sensitive applications, offline development, or just experimenting with local models.

🔀 Hundreds of Models via OpenRouter

Access models from dozens of providers through a single API:

RubyLLM.configure do |config|
  config.openrouter_api_key = ENV['OPENROUTER_API_KEY']
end

# Access any model through OpenRouter
chat = RubyLLM.chat(model: 'anthropic/claude-3.5-sonnet', provider: 'openrouter')

One API key, hundreds of models. Simple.

🌐 The End of Manual Model Tracking

We've partnered with Parsera to create a single source of truth for LLM capabilities and pricing. When you run RubyLLM.models.refresh!, you're now pulling from the Parsera API - a continuously updated registry that scrapes model information directly from provider documentation.

No more manually updating capabilities files every time OpenAI changes their pricing. No more hunting through documentation to find context windows. It's all there, always current.
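A quick sketch of what that looks like in practice. The filter and attribute names below (`chat_models`, `context_window`) follow the model registry API; treat the exact methods as assumptions and check the model guide:

```ruby
# Pull the latest capabilities and pricing from the Parsera API
RubyLLM.models.refresh!

# Then query the refreshed registry locally (method names assumed
# from the models guide)
RubyLLM.models.chat_models.each do |model|
  puts "#{model.id}: #{model.context_window} tokens"
end
```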

Read more about this revolution in my blog post.

🔢 Custom Embedding Dimensions

Fine-tune your embeddings for specific use cases:

# Generate compact embeddings for memory-constrained environments
embedding = RubyLLM.embed(
  "Ruby is a programmer's best friend",
  model: "text-embedding-3-small",
  dimensions: 512  # Instead of the default 1536
)

🏢 Enterprise OpenAI Support

Organization and project IDs are now supported:

RubyLLM.configure do |config|
  config.openai_api_key = ENV['OPENAI_API_KEY']
  config.openai_organization_id = ENV['OPENAI_ORG_ID']
  config.openai_project_id = ENV['OPENAI_PROJECT_ID']
end

📊 Official Version Support

We now officially support and test against:

  • Ruby 3.1 to 3.4
  • Rails 7.1 to 8.0

Your favorite Ruby version is covered.

Upgrade Today

gem 'ruby_llm', '1.3.0'

As always, we've maintained full backward compatibility. Your existing code continues to work exactly as before, but now with magical attachment handling and powerful new capabilities.

Read the full story in my blog post and check out the complete model guide.

Merged PRs

  • Support Embedding Dimensions by @papgmez in #73
  • Updated acts_as_* helpers to use canonical 'rails-style' foreign keys by @timaro in #151
  • Add support for logging to file via configuration by @rhys117 in #148
  • Use foreign_key instead of to_s for acts_as methods by @bborn in #157
  • Fix inflector by @bborn in #159
  • Handle OpenAI organization and project IDs by @xymbol in #162
  • Don't call blob on a blob by @bborn in #164
  • fix: prevent with_instructions from throwing error by @roelbondoc in #176
  • replace Utils.deep_symbolize_keys with native symbolize_names: true by @max-power in #179
  • [Gemspec] - Allow lower faraday versions by @itstheraj in #173
  • feat(Configuration): Add http_proxy configuration option by @stadia in #189
  • Support gpt-image-1 by @tpaulshippy in #201
  • Issue 203: Fix model alias matching for provider prefixes (claude-3-7-sonnet support) by @Sami-Tanquary in #206
  • Update tools.md by @seemiller in #212
  • Use consistent name for Weather tool in docs by @seemiller in #215


Full Changelog: 1.2.0...1.3.0
