Added support for parallel tool calling, which allows the LLM to call multiple tools simultaneously in a single turn
- Requires support from the inference server
- In some cases this behavior is controlled by the server itself, in which case the toggle has no effect
- Enabled by default for all agents, can be disabled in agent configuration if required
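With parallel tool calling enabled, a single assistant turn may request several tool invocations at once, and the agent dispatches each before replying. A minimal sketch of that dispatch loop, assuming an OpenAI-style tool-call shape (the tool names and registry here are hypothetical, not this project's actual API):

```python
import json

# Hypothetical tool registry; names and behavior are illustrative only.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "get_time": lambda tz: f"12:00 in {tz}",
}

def dispatch(tool_calls):
    """Execute every tool call emitted in a single assistant turn.

    With parallel tool calling enabled, this list may hold several
    entries; with it disabled, the model emits at most one per turn.
    """
    results = []
    for call in tool_calls:
        fn = TOOLS[call["name"]]
        args = json.loads(call["arguments"])  # arguments arrive as a JSON string
        results.append({"name": call["name"], "content": fn(**args)})
    return results

# One assistant turn requesting two tools simultaneously.
turn = [
    {"name": "get_weather", "arguments": '{"city": "Oslo"}'},
    {"name": "get_time", "arguments": '{"tz": "UTC"}'},
]
print(dispatch(turn))
```

Disabling the toggle in the agent configuration simply constrains the model to one tool call per turn; the dispatch logic itself is unchanged.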
Fixed an issue where disabling a tooling provider could leave you unable to reconfigure your LLM Agent
Full Changelog: 1.0.6...1.1.0