@agent Overhaul & streaming ⚡️
[Demo video: agent-streaming.mp4]
When AnythingLLM first launched, the word "agent" was not in the vocabulary of the LLM world. Agents are quickly becoming the standard for building AI applications and the core experience for interacting with LLMs.
For too long, due to the complexity of building agents, spotty tool-call support, models that can't use tools at all, and more nerd stuff, we had to settle for an experience that was not much fun to use, since 99% of the time you were just staring at loading spinners waiting for a response.
The new agent experience is now here:
- Streams tool calls and responses in real time (all providers, all models)
- Agents can now download and ingest files from the web in real time (e.g. a link to a PDF, Excel, or CSV file). Anything you would use as a document can be read in real time by the agent straight from the web (see the example prompt below).
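For example, a message like "@agent read https://example.com/q3-report.pdf and summarize the key figures" makes the agent fetch and parse the linked file within the same turn. (The URL here is only a placeholder; any directly reachable document link behaves the same way.)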
Upcoming:
- Agent real-time API calling without agent flows
- Agent image understanding
- Agent system prompt passthrough + user context awareness
- Real-time file searching as a cross-platform default skill
Notable Improvements: 🚀
- All models and providers now support agentic streaming (see the sketch after this list)
- Microsoft Foundry Local integration
- Ephemerally scrape/download any web resource via the agent or the uploader
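If you drive AnythingLLM programmatically rather than through the UI, the same streamed agent output is available over the developer API. Below is a minimal TypeScript sketch assuming the v1 workspace stream-chat endpoint and a developer API key; the base URL, workspace slug, and payload shape are placeholders you should verify against your instance's API documentation.

```ts
// Minimal sketch: stream an @agent chat turn from an AnythingLLM workspace.
// Assumptions (verify against your instance's API docs): the v1 developer API is
// enabled, and BASE_URL / API_KEY / the workspace slug below are placeholders.
const BASE_URL = "http://localhost:3001/api/v1";
const API_KEY = process.env.ANYTHINGLLM_API_KEY ?? "";

async function streamAgentChat(workspaceSlug: string, message: string): Promise<void> {
  const res = await fetch(`${BASE_URL}/workspace/${workspaceSlug}/stream-chat`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    // Prefixing the message with @agent flips this turn into agent mode.
    body: JSON.stringify({ message: `@agent ${message}`, mode: "chat" }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  // Tool calls and partial responses arrive as streamed chunks; print them as they land.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
}

streamAgentChat("my-workspace", "Download https://example.com/report.pdf and summarize it")
  .catch(console.error);
```

With real-time agent streaming enabled for every provider, those chunks now include tool-call activity as it happens rather than a single payload at the end.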
What's Changed
- Allow default users to reorder workspaces by @shatfield4 in #4292
- Export image support for JSON and JSONL by @shatfield4 in #4359
- Fix: missing edit icon for prompts by @17ColinMiPerry in #4344
- feat(i18n): add missing Portuguese (Brazil) translations by @beckeryuri in #4328
- feat: Implement CometAPI integration for chat completions and model m… by @TensorNull in #4379
- Resize chat textarea on paste by @shatfield4 in #4369
- update save file agent text by @timothycarambat in #4389
- Added metadata parameter to document/upload, document/upload/{folderName}, and document/upload-link by @jstawski in #4342
- Add support for `SIMPLE_SSO_NO_LOGIN_REDIRECT` config setting by @timothycarambat in #4394
- patch folder name GET request response by @timothycarambat in #4395
- Add User-Agent header on the requests sent by Generic OpenAI providers. by @angelplusultra in #4393
- Report sources in API responses on finalized chunk by @timothycarambat in #4396
- Allow user to specify args for chromium process so they don't need SYS_ADMIN on container. by @timothycarambat in #4397
- API request delay for Generic OpenAI embedding engine by @chaserhkj in #4317
- Enhanced Chat Embed History View by @MateKristof in #4281
- Ignore hasOwnProperty linting errors by @shatfield4 in #4406
- Migrate OpenAI LLM provider to use Responses API by @shatfield4 in #4404
- Update the timeout value on all stream-timeout providers: by @timothycarambat in #4412
- [BUGFIX] Update Dell Pro AI Studio Default URL by @spencerbull in #4433
- Add PostgreSQL vector extension in createTableIfNotExists function by @angelplusultra in #4430
- fix: resolve Firefox search icon overlapping placeholder text by @naaa760 in #4390
- Refactor Class Name Logging by @angelplusultra in #4426
- Change incorrect notation of Weaviate to PG Vector in env.example by @angelplusultra in #4439
- Enable custom HTTP response timeout for ollama by @timothycarambat in #4448
- fix: YouTube transcript collector not working well with non-English or non-ASR captions by @AoiYamada in #4442
- Add HTTP request/response logging middleware for development mode by @angelplusultra in #4425
- Sanitize Metadata Before PG Vector Database Insertion by @angelplusultra in #4434
- New Default System Prompt Variables (User ID, Workspace ID, & Workspace Name) by @angelplusultra in #4414
- Apply renderer from chat widget history to workspace chats by @timothycarambat in #4456
- Patch OpenAI metrics by @timothycarambat in #4458
- fix(uiux): correct typo in System Prompt description text by @vansh2408 in #4461
- Enable real-time agent tool call streaming for all providers by @timothycarambat in #4279
- Add stream options to Gemini LLM for usage tracking by @angelplusultra in #4466
- Fetch, Parse, and Create Documents for Statically Hosted Files by @angelplusultra in #4398
- Migrate OpenAI Agent to use ResponsesAPI by @timothycarambat in #4467
- Microsoft Foundry Local LLM provider & agent provider by @shatfield4 in #4435
- Model context limit auto-detection for LM Studio and Ollama LLM Providers by @shatfield4 in #4468
- Sync models from remote for FireworksAI by @timothycarambat in #4475
- Render html optional by @timothycarambat in #4478
- Adding AnythingLLM Helm Chart by @sculley in #4484
- Reimplement Cohere models for basic chat by @timothycarambat in #4489
- Tooltips for workspace and threads by @timothycarambat in #4500
- Improve URL handler for collector processes by @timothycarambat in #4504
- Migrate gemini agents away from `Untooled` by @timothycarambat in #4505
- Update .gitignore by @jaynedoezy-web in #4507
- refactor: change naming - contextwarpper to authprovider #4510 by @Guru6163 in #4511
- fix label for chunk length setting by @timothycarambat in #4515
- Fix: File pulling fails with uppercase URL characters by @angelplusultra in #4516
New Contributors
- @beckeryuri made their first contribution in #4328
- @TensorNull made their first contribution in #4379
- @jstawski made their first contribution in #4342
- @angelplusultra made their first contribution in #4393
- @chaserhkj made their first contribution in #4317
- @MateKristof made their first contribution in #4281
- @spencerbull made their first contribution in #4433
- @AoiYamada made their first contribution in #4442
- @vansh2408 made their first contribution in #4461
- @sculley made their first contribution in #4484
- @jaynedoezy-web made their first contribution in #4507
- @Guru6163 made their first contribution in #4511
Full Changelog: v1.8.5...v1.9.0