What's Changed
- Fix composite function consuming unnecessary contexts by @jekalmin in #388
- Expose error message in conversation by @jekalmin in #389
- Change default settings to suit reasoning model by @jekalmin in #390
  - model: `gpt-4o-mini` → `gpt-5-mini`

    Pricing comparison (USD per 1M tokens):

    | Model | Input | Cached input | Output |
    | --- | --- | --- | --- |
    | gpt-4o-mini | $0.15 | $0.075 | $0.60 |
    | gpt-5-mini (flex) | $0.125 | $0.0125 | $1.00 |

  - max tokens: 150 → 500
  - max function calls per conversation: 1 → 3
  - prompt
  - functions
    - Add `delay` properties in `execute_services` function
    - Add `get_attributes` function
  - Add (see the sketch after this list)
    - service tier: flex
    - reasoning effort: low
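
For context, here is a minimal sketch of how the new defaults and options above could be expressed as a direct OpenAI Chat Completions request. It is illustrative only: the messages are made up, and the integration may wire these settings differently under the hood.

```python
# Illustrative sketch of the new beta4 defaults as OpenAI Chat Completions
# parameters; not the integration's actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-5-mini",          # new default model (was gpt-4o-mini)
    max_completion_tokens=500,   # new "max tokens" default (was 150)
    service_tier="flex",         # newly exposed option: flex processing
    reasoning_effort="low",      # newly exposed option: low reasoning effort
    messages=[
        {"role": "system", "content": "You are a Home Assistant assistant."},
        {"role": "user", "content": "Turn off the living room lights."},
    ],
)
print(response.choices[0].message.content)
```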
Full Changelog: 3.0.0-beta3...3.0.0-beta4