RubyLLM 1.12: Agents, Full Cloud Provider Coverage, New Instructions Semantics, and Contributor Guidelines 🎉🤖☁️
This is a big one.
RubyLLM 1.12 brings a new Agent interface and completes coverage of the major cloud providers:
- GCP coverage via Vertex AI (already supported)
- New: full AWS coverage via Bedrock Converse API
- New: full Azure coverage via Azure AI Foundry API
🤖 New Agent Interface
Agents are now a first-class way to define reusable AI behavior once and use it everywhere.
```ruby
class WorkAssistant < RubyLLM::Agent
  chat_model Chat
  model "gpt-4.1-nano"
  instructions "You are a concise work assistant."
  tools TodoTool, GoogleDriveSearchTool
end
```

Use it directly:
```ruby
response = WorkAssistant.new.ask("What should I work on today?")
```

Or with Rails-backed chats:
```ruby
chat = WorkAssistant.create!(user: current_user)
WorkAssistant.find(chat.id).complete
```

Prompt conventions are built in (app/prompts/<agent_name>/instructions.txt.erb).
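For the WorkAssistant above, that means the system prompt can live in app/prompts/work_assistant/instructions.txt.erb instead of the inline instructions call. A minimal sketch of what such a file might contain (the contents are illustrative, not shipped code):

```erb
You are a concise work assistant.
Today is <%= Time.now.strftime("%A, %B %-d") %>.
Prefer short, actionable answers.
```

Because the template is ERB, the prompt can interpolate per-request context when it is rendered.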
More on agents: https://rubyllm.com/agents
☁️ Bedrock Converse API: Full Bedrock Coverage
RubyLLM now uses the Bedrock Converse API, which means every Bedrock chat model is supported through one consistent path.
```ruby
chat = RubyLLM.chat(
  model: "anthropic.claude-haiku-4-5-20251001-v1:0",
  provider: :bedrock
)
response = chat.ask("Give me three ideas for reducing API latency.")
```

If it runs on Bedrock, RubyLLM can talk to it.
☁️ Azure AI Foundry API Support
RubyLLM now supports the Azure AI Foundry API, giving you broad model access on Azure through the same RubyLLM interface.
```ruby
RubyLLM.configure do |config|
  config.azure_api_key = ENV["AZURE_API_KEY"]
  config.azure_api_base = ENV["AZURE_API_BASE"]
end

chat = RubyLLM.chat(model: "gpt-4.1", provider: :azure)
response = chat.ask("Summarize this architecture in one paragraph.")
```

Same API, Azure-wide model availability.
🧠 Instruction Semantics Improved
with_instructions behavior is now clearer:
- a default call replaces the active system instruction
- appending is opt-in and explicit (append: true)
- instructions are always sent before other messages
chat.with_instructions("You are concise.")
chat.with_instructions("Use bullet points.", append: true)🤝 Contributor + Provider Guidance Expanded
🤝 Contributor + Provider Guidance Expanded
We clarified how contributions should flow so reviews are faster and less surprising.
What we ask now:
- Open an issue first and wait for maintainer feedback before coding new features.
- Keep PRs focused and reasonably sized.
- If you used AI tooling, you still own the code: understand every line before opening the PR.
Provider-specific direction is also clearer:
- Core providers have a high acceptance bar.
- For smaller or emerging providers, we usually prefer a community gem over adding it to RubyLLM core.
Net effect: less churn in review, clearer expectations up front.
📚 Docs & DX Polishes
A bunch of quality-of-life improvements shipped alongside core features:
- updated guides around agents and configuration
- docs UX improvements (copy page button, dark mode polish)
Installation
gem "ruby_llm", "1.12.0"Upgrading from 1.11.x
```
bundle update ruby_llm
```

Full Changelog: 1.11.0...1.12.0