skye-harris/hass_local_openai_llm 1.1.0


Added support for parallel tool calling, which allows the LLM to call multiple tools simultaneously in a single turn (see the sketch after this list)

  • Requires support from the inference server
  • Some servers control this behaviour themselves, in which case the toggle has no effect
  • Enabled by default for all agents; can be disabled per agent in its configuration if required
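As a rough illustration of what the toggle corresponds to at the API level, the sketch below shows how parallel tool calls surface through an OpenAI-compatible chat completions endpoint. The base URL, model name, and `get_temperature` tool are assumptions for the example, not taken from this integration's code.

```python
# Minimal sketch, assuming a local OpenAI-compatible inference server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_temperature",  # hypothetical tool for this example
        "description": "Read a sensor's temperature",
        "parameters": {
            "type": "object",
            "properties": {"entity_id": {"type": "string"}},
            "required": ["entity_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="local-model",  # whatever model name the server exposes
    messages=[{"role": "user", "content": "Temperature in the kitchen and garage?"}],
    tools=tools,
    # The agent toggle maps to a request flag like this one; some servers
    # ignore it and decide on their own how many tool calls to emit per turn.
    parallel_tool_calls=True,
)

# With parallel tool calling, a single assistant turn may carry several
# tool_calls entries; without it, at most one call per turn is expected.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```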

Fixed an issue where disabling a tooling provider could prevent you from reconfiguring your LLM Agent

Full Changelog: 1.0.6...1.1.0
