### Patch Changes
- #71 `1f20509` Thanks @omeraplak! - feat: Standardize Agent Error and Finish Handling

  This change introduces a more robust and consistent way errors and successful finishes are handled across the `@voltagent/core` Agent and LLM provider implementations (like `@voltagent/vercel-ai`).

  **Key Improvements:**

  - **Standardized Errors (`VoltAgentError`):**
    - Introduced `VoltAgentError`, `ToolErrorInfo`, and `StreamOnErrorCallback` types in `@voltagent/core`.
    - LLM Providers (e.g., Vercel) now wrap underlying SDK/API errors into a structured `VoltAgentError` before passing them to `onError` callbacks or throwing them.
    - Agent methods (`generateText`, `streamText`, `generateObject`, `streamObject`) now consistently handle `VoltAgentError`, enabling richer context (stage, code, tool details) in history events and logs (see the hedged sketch at the end of this entry).
  - **Standardized Stream Finish Results:**
    - Introduced `StreamTextFinishResult`, `StreamTextOnFinishCallback`, `StreamObjectFinishResult`, and `StreamObjectOnFinishCallback` types in `@voltagent/core`.
    - LLM Providers (e.g., Vercel) now construct these standardized result objects upon successful stream completion.
    - Agent streaming methods (`streamText`, `streamObject`) now receive these standardized results in their `onFinish` handlers, ensuring consistent access to final output (`text` or `object`), `usage`, `finishReason`, etc., for history, events, and hooks (also illustrated in the sketch below).
  - **Updated Interfaces:** The `LLMProvider` interface and related options types (`StreamTextOptions`, `StreamObjectOptions`) have been updated to reflect these new standardized callback types and error-throwing expectations.

  These changes lead to more predictable behavior, improved debugging capabilities through structured errors, and a more consistent experience when working with different LLM providers.
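
  To make the new contracts more concrete, here is a minimal TypeScript sketch that mirrors the shapes described above using locally defined stand-in types. The real `VoltAgentError`, `ToolErrorInfo`, and `StreamTextFinishResult` types live in `@voltagent/core`; any field name not mentioned in this entry, as well as the wiring suggested in the final comment, is an assumption for illustration rather than the library's exact API.

  ```ts
  // Hedged sketch only -- stand-in types that approximate the shapes described
  // in this changelog. In real code, import the actual types from `@voltagent/core`.

  // Assumed shape of the tool details attached to a standardized error.
  interface ToolErrorInfoLike {
    toolName: string; // which tool failed (assumed field name)
    error: unknown;   // the underlying tool error (assumed field name)
  }

  // Assumed shape of the standardized error delivered to `onError` callbacks.
  interface VoltAgentErrorLike {
    message: string;
    stage?: string;                // e.g. "llm_stream" or "tool_execution" (assumed values)
    code?: string;                 // provider/SDK error code, when available
    toolError?: ToolErrorInfoLike; // present when a tool call failed
    originalError?: unknown;       // the wrapped SDK/API error
  }

  // Assumed shape of the standardized result delivered to `onFinish` for text streams.
  interface StreamTextFinishResultLike {
    text: string;
    usage?: { promptTokens: number; completionTokens: number; totalTokens: number };
    finishReason?: string;
  }

  // Example callbacks a caller might supply via the stream options.
  const onError = (error: VoltAgentErrorLike): void => {
    console.error(`[${error.stage ?? "unknown stage"}] ${error.message} (code: ${error.code ?? "n/a"})`);
    if (error.toolError) {
      console.error(`tool "${error.toolError.toolName}" failed:`, error.toolError.error);
    }
  };

  const onFinish = (result: StreamTextFinishResultLike): void => {
    console.log("final text:", result.text);
    console.log("finish reason:", result.finishReason, "usage:", result.usage);
  };

  // How these callbacks are passed is an assumption here; conceptually they travel
  // through `StreamTextOptions` when the agent's `streamText` method runs.
  ```

  `streamObject` follows the same pattern: its `onFinish` receives a `StreamObjectFinishResult` carrying the final `object` in place of `text`.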

- Updated dependencies [`1f20509`, `1f20509`, `7a7a0f6`]:
  - @voltagent/core@0.1.9