github valentinfrlch/ha-llmvision v1.5.1
New Providers, Bug Fixes & Improvements


1.5.1 Release Notes

⚠️ Please read the breaking changes before updating!

This update adds native support for OpenRouter and Azure, fixes an issue with the Google Gemini provider, adds support for GPT-5 models, and adds Dutch and Polish translations.

Contributors

A huge thank you to our contributors @TheRealFalseReality, @NikNikovsky, @Minionguyjpro, @meceonzo, and of course everyone who helped test and provided feedback on the beta versions!

⚠️ Breaking Changes

  • Timeline snapshots have moved from /www to /media. Add the following to your configuration.yaml:

    # Add llmvision /media folder
    homeassistant:
      media_dirs:
        media: /media
        llmvision: /config/media/llmvision

    💡 Make sure to create the /media folder and the llmvision folder inside it before restarting!
    💡 Update the LLM Vision Card to the latest version, otherwise snapshots won't display correctly.

  • max_tokens is now displayed as a box-type number selector rather than a slider. This change accommodates newer models whose thinking tokens count toward max_tokens. Unlike the limited slider, the box allows much larger values to be entered.

    💡 If you notice empty responses, consider increasing max_tokens.

Integration

✨ Features

  • New Provider: LLM Vision now supports OpenRouter natively. (@valentinfrlch)
  • New Provider: LLM Vision now supports Azure natively (#64, #103, #144). (@valentinfrlch)
  • Support for Polish language: Added 🇵🇱 Polish translations. Thank you @NikNikovsky!
  • Support for Dutch language: Added 🇳🇱 Dutch translations. Thank you @Minionguyjpro!

🔧 Improvements & Fixes

  • Google Gemini fix: Fixed an infinite retry loop in the Google Gemini provider and corrected an issue where the fallback provider wasn't used properly (#398, #262). (@valentinfrlch)
  • Support for GPT-5 models: Removed temperature and top_p from the request when a GPT-5 model is used (#437). (@valentinfrlch)
  • Moved Timeline snapshots: Snapshots are now stored securely behind Home Assistant authentication in /media. LLM Vision will attempt to migrate previously captured snapshots to the new folder.
  • Remove max_tokens limit: max_tokens is now displayed as a box type number selector rather than a slider. This change allows for larger token limits, as thinking tokens count toward the max_tokens limit. (@valentinfrlch)
  • Gemini key logging: Fixed a security issue where the Google API key was logged in clear text (#334). (@valentinfrlch)
  • Missing import: Added the aiofile package to imports (#404). (@valentinfrlch)
  • Sanitize IP address: Any protocol prefix entered in the IP address field is now ignored for the Ollama and Open WebUI providers. Thank you @meceonzo!
  • Fallback provider: Fixed a retry logic bug that caused an exception when no fallback provider was configured. (#461)
  • Title generation: Fixed an issue that caused an exception when generate_title was not set. (#463)
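Two of the fixes above can be illustrated with short sketches (illustrative only, not the integration's actual code): dropping sampling parameters for GPT-5 models, and stripping a protocol prefix from the host field.

```python
import re

def build_payload(model: str, params: dict) -> dict:
    """Sketch of the GPT-5 fix: omit temperature/top_p for gpt-5 models."""
    payload = {"model": model, **params}
    if model.startswith("gpt-5"):
        # These models don't accept custom sampling parameters
        payload.pop("temperature", None)
        payload.pop("top_p", None)
    return payload

def sanitize_host(raw: str) -> str:
    """Sketch of the host sanitizer: drop any protocol prefix and trailing slash."""
    host = re.sub(r"^[a-z][a-z0-9+.-]*://", "", raw.strip(), flags=re.IGNORECASE)
    return host.rstrip("/")
```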

Blueprint

Update the blueprint by re-importing it from the Blueprint settings page in Home Assistant.

https://github.com/valentinfrlch/ha-llmvision/blob/main/blueprints/event_summary.yaml

A huge thank you to @TheRealFalseReality for maintaining the blueprint!

✨ Features

  • See snapshot quick action: The notification now shows a button to preview the snapshot in a browser.
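A notification with such a quick action might look roughly like the following (a hedged sketch using the Home Assistant companion app's actionable-notification schema; the notify target, message, and snapshot URI are placeholders, not the blueprint's actual values):

```yaml
action: notify.mobile_app_your_phone  # placeholder notify target
data:
  message: "Person detected at the front door"
  data:
    actions:
      - action: URI
        title: "See snapshot"
        # Placeholder path; the blueprint builds the real snapshot URL
        uri: "/media/local/llmvision/snapshots/front_door.jpg"
```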

🔧 Improvements & Fixes

  • Multiple Cameras: Fixed an issue that displayed the wrong camera (#403).
  • Live Preview notification (iOS only): Preview mode can now be customized before and after analysis.
  • Time format: Remove leading zero from 12-hour format.
  • New media folder: Changed snapshot location from /www to /media/llmvision/snapshots (#457).
