# RubyLLM 1.1.2: GPT-4.1 Million-Token Support & New Model Families
This release adds support for OpenAI's GPT-4.1 family with its massive context window, along with several new model families from Google and other providers.
## 🎯 New Features
- GPT-4.1 Support: Full integration of the GPT-4.1 family, with support for:
  - Million-token context window (1,047,576 tokens)
  - Vision capabilities
  - Function calling
  - Structured output
  - Cached input pricing optimization
```ruby
chat = RubyLLM.chat(model: "gpt-4.1")      # Standard model
chat = RubyLLM.chat(model: "gpt-4.1-mini") # Cost-optimized version
chat = RubyLLM.chat(model: "gpt-4.1-nano") # Minimal pricing version
```
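To get a feel for what a 1,047,576-token window holds, here is a small self-contained sketch (not part of the gem) that estimates whether a document fits, using the common rough heuristic of ~4 characters per token for English text. Actual counts depend on the model's tokenizer.

```ruby
# Rough, illustrative estimate only: real token counts come from the tokenizer.
GPT_4_1_CONTEXT = 1_047_576
CHARS_PER_TOKEN = 4 # rough average for English prose

def fits_in_context?(text, window: GPT_4_1_CONTEXT)
  estimated_tokens = (text.length / CHARS_PER_TOKEN.to_f).ceil
  estimated_tokens <= window
end

fits_in_context?("a" * 4_000_000) # ~1,000,000 tokens => true
fits_in_context?("a" * 5_000_000) # ~1,250,000 tokens => false
```

In practice this means entire codebases or book-length documents can go into a single prompt, though you still pay per input token.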
- New Google Models: Added support for the latest Gemini models:
  - Gemini 2.0 Flash Live for real-time interactions
  - Gemini 2.5 Pro Preview with enhanced capabilities
  - Gemma 3 for open model support
  - Veo 2 model family
## 💎 Improvements
- Enhanced capabilities detection for new model families
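Capability detection for a new family generally comes down to matching a model id against known family prefixes and returning that family's flags. The following is a simplified, self-contained illustration of that pattern, not the gem's actual implementation; the ids come from this release, and the capability flags are placeholders.

```ruby
# Illustrative longest-prefix lookup from model id to capability flags.
# Flags here are example values, not an authoritative registry.
CAPABILITIES = {
  "gpt-4.1"        => { context_window: 1_047_576, vision: true, function_calling: true },
  "gemini-2.5-pro" => { vision: true, function_calling: true }
}.freeze

def capabilities_for(model_id)
  # Prefer the most specific (longest) matching family prefix.
  family = CAPABILITIES.keys
                       .select { |prefix| model_id.start_with?(prefix) }
                       .max_by(&:length)
  family && CAPABILITIES[family]
end

capabilities_for("gpt-4.1-nano") # variants resolve to the gpt-4.1 family
capabilities_for("unknown")      # => nil for models outside the registry
```

The prefix match is what lets variants like `gpt-4.1-mini` and `gpt-4.1-nano` inherit their family's capabilities without separate entries.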
This is an optional update unless you need support for the new model families.

```ruby
gem 'ruby_llm', '~> 1.1.2'
```
Full Changelog: 1.1.1...1.1.2