AnythingLLM v1.11.1


Homepage Redesign

The main AnythingLLM homepage has been completely redesigned to be more modern and user-friendly so you can instantly start chatting the second you open the app after onboarding.

[Screenshot: redesigned homepage]

Native Tool Calling

Native tool calling delivers the best performance and experience for tool calling with your LLM provider and model. If you can enable it, you should.

Note: this only applies to local LLM providers. It has no impact on cloud LLMs like OpenAI, Anthropic, or Azure.

We have completely overhauled how @agent tool calling works. Now, we will leverage the new native tool calling abilities of your LLM provider and model.

What this means for you:

  • You can now run complex, multi-step tool calls with your LLM provider and model.
  • Your model will continue working until the final response is generated or the task is determined to be complete.
  • You will get dramatically better responses, even from small tool-calling models.

We have also implemented safeguards against infinite loops: a maximum of 10 tool calls per response prevents runaway tasks.
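The multi-step flow with a loop cap can be sketched roughly like this. This is a minimal illustration, not AnythingLLM's actual implementation: the `llm` client and `runTool` dispatcher are hypothetical placeholders standing in for the provider connector and agent tool registry.

```javascript
// Sketch of a native tool-calling loop with a hard cap on tool calls.
// `llm` (with an async chat() method) and `runTool` are hypothetical.
const MAX_TOOL_CALLS = 10; // safeguard against runaway tasks

async function agentChat(llm, runTool, messages) {
  for (let calls = 0; calls < MAX_TOOL_CALLS; calls++) {
    const reply = await llm.chat(messages);

    // No tool calls requested: the model produced its final answer.
    if (!reply.toolCalls || reply.toolCalls.length === 0) {
      return reply.content;
    }

    // Execute each requested tool and feed results back into the
    // conversation so the model can continue the multi-step task.
    messages.push({ role: "assistant", tool_calls: reply.toolCalls });
    for (const call of reply.toolCalls) {
      const result = await runTool(call.name, call.arguments);
      messages.push({ role: "tool", name: call.name, content: result });
    }
  }
  return "Stopped: maximum tool calls reached.";
}
```

The cap means a model that keeps requesting tools will be cut off after 10 rounds instead of looping forever.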

[Screenshot: native tool calling]

Limitations

Most providers do not let us probe whether a model supports native tool calling.

The following local LLM providers will automatically support native tool calling if your model supports it:

  • Default built-in LLM provider (AnythingLLM Default)
  • Ollama
  • LM Studio

For the following providers, you will need to set an environment variable to enable native tool calling:

  • Generic OpenAI
  • Groq
  • AWS Bedrock
  • Lemonade
  • LiteLLM
  • Local AI
  • OpenRouter

This can be set via the `PROVIDER_SUPPORTS_NATIVE_TOOL_CALLING` environment variable:

```
PROVIDER_SUPPORTS_NATIVE_TOOL_CALLING="bedrock,generic-openai,groq,lemonade,litellm,local-ai,openrouter"
```
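At startup, a check against this comma-separated list could look roughly like the sketch below. The variable name comes from the release notes; the parsing logic and function name are assumptions for illustration, not AnythingLLM's actual code.

```javascript
// Sketch: decide whether a provider slug may use native tool calling,
// based on the comma-separated PROVIDER_SUPPORTS_NATIVE_TOOL_CALLING list.
// The helper name and parsing details are hypothetical.
function supportsNativeToolCalling(providerSlug, env = process.env) {
  const raw = env.PROVIDER_SUPPORTS_NATIVE_TOOL_CALLING || "";
  const enabled = raw
    .split(",")
    .map((slug) => slug.trim().toLowerCase())
    .filter(Boolean);
  return enabled.includes(providerSlug.toLowerCase());
}
```

For example, with the value shown above, `supportsNativeToolCalling("groq")` would be true, while an unlisted provider would fall back to non-native tool calling.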

Lemonade by AMD Integration

[Image: Lemonade by AMD]

Lemonade by AMD is an open-source local model runtime that optimizes performance and efficiency for local models (LLM, ASR, TTS, Image Generation, etc.) for all types of hardware including AMD GPUs and NPUs.

We have added first-class support so you can use your local models running via Lemonade within AnythingLLM for the best application experience on top of your local hardware.


What's Changed

New Contributors

Full Changelog: v1.11.0...v1.11.1
