### Major Changes
- d964901:
  - remove setting temperature to `0` by default
  - remove `null` option from `DefaultSettingsMiddleware`
  - remove setting defaults for `temperature` and `stopSequences` in `ai` to enable middleware changes
  - remove
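With the SDK's built-in temperature default removed, defaults now come from the middleware layer, where caller-provided settings override middleware defaults. The sketch below is illustrative only and is not SDK code: the `Settings` type and `withDefaults` helper are hypothetical names standing in for the merge semantics a defaults middleware applies.

```typescript
// Illustrative sketch (hypothetical names, not the SDK's implementation):
// a defaults middleware fills in settings the caller did not provide.
type Settings = { temperature?: number; stopSequences?: string[] };

function withDefaults(defaults: Settings, call: Settings): Settings {
  // Spread order matters: caller values overwrite defaults,
  // defaults only fill keys the call left undefined.
  return { ...defaults, ...call };
}

// With no SDK-level default temperature, an explicit default must be
// supplied here; the call's own settings still win.
const settings = withDefaults(
  { temperature: 0.7 },
  { stopSequences: ['END'] },
);
// settings: { temperature: 0.7, stopSequences: ['END'] }
```

A call that passes `temperature` explicitly keeps its own value; the middleware default applies only when the key is absent.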
- 0560977: chore (ai): improve consistency of generate text result, stream text result, and step result
- 516be5b: ### Move Image Model Settings into generate options

  Image models no longer have settings. Instead, `maxImagesPerCall` can be passed directly to `generateImage()`. All other image settings can be passed to `providerOptions[provider]`.

  Before:

  ```ts
  await generateImage({
    model: luma.image('photon-flash-1', {
      maxImagesPerCall: 5,
      pollIntervalMillis: 500,
    }),
    prompt,
    n: 10,
  });
  ```

  After:

  ```ts
  await generateImage({
    model: luma.image('photon-flash-1'),
    prompt,
    n: 10,
    maxImagesPerCall: 5,
    providerOptions: {
      luma: { pollIntervalMillis: 5 },
    },
  });
  ```

  Pull Request: #6180
- bfbfc4c: feat (ai): streamText/generateText: `totalUsage` contains usage for all steps; `usage` is for a single step.
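The distinction above is that `usage` reports one step while `totalUsage` aggregates every step of a multi-step run. A minimal sketch of that aggregation (illustrative only; the `Usage` shape and field names here are simplified assumptions, not the SDK's actual types):

```typescript
// Illustrative sketch: totalUsage as the element-wise sum of per-step usage.
// The Usage type below is a simplified stand-in, not the SDK's type.
type Usage = { inputTokens: number; outputTokens: number };

function totalUsage(steps: Usage[]): Usage {
  return steps.reduce(
    (acc, u) => ({
      inputTokens: acc.inputTokens + u.inputTokens,
      outputTokens: acc.outputTokens + u.outputTokens,
    }),
    { inputTokens: 0, outputTokens: 0 },
  );
}

// Two steps of a hypothetical tool-calling run:
const steps: Usage[] = [
  { inputTokens: 10, outputTokens: 5 },
  { inputTokens: 20, outputTokens: 8 },
];
const total = totalUsage(steps);
// total: { inputTokens: 30, outputTokens: 13 }
```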
- ea7a7c9: feat (ui): UI message metadata
- 1409e13: chore (ai): remove experimental `continueSteps`
### Patch Changes
- 66af894: fix (ai): respect content order in toResponseMessages
- Updated dependencies [ea7a7c9]
  - @ai-sdk/provider-utils@3.0.0-canary.17