RubyLLM 1.8.1: Efficient Chat Streaming 🚀💬
Small release bringing production-ready streaming for Rails chat UIs and minor fixes.
💬 Production-Ready Chat Streaming
Improved the chat UI generator with efficient chunk streaming that reduces bandwidth usage:
- Bandwidth optimized: New `broadcast_append_chunk` method appends individual chunks without re-transmitting entire messages
- Single subscription: Maintains one Turbo Stream subscription at the chat level (not per message)
- Reduced overhead: Jobs now append chunks instead of updating entire messages, reducing database writes
🔧 Improvements & Fixes
- Cleaner injection: Refined message model class injection for chat UI generator
- GPT-5 capabilities: Fixed missing capability declarations for parsera compatibility
- Funding support: Added funding URI to gemspec metadata
- Documentation updates: Enhanced README and moderation guides
- Model registry: Latest model updates across all providers
- Dependency updates: Updated Appraisal gemfiles
Installation
`gem 'ruby_llm', '1.8.1'`
Upgrading from 1.8.0
`bundle update ruby_llm`
All changes are backward compatible. To benefit from the streaming improvements, regenerate your chat UI with `rails generate ruby_llm:chat_ui`.
Full Changelog: 1.8.0...1.8.1