## 1.0.0-beta.8 (2023-09-21)

### Features Added
- Audio Transcription and Audio Translation using OpenAI Whisper models are now supported. See OpenAI's API reference or the Azure OpenAI quickstart for a detailed overview and background information.
  - The new methods `GetAudioTranscription` and `GetAudioTranslation` expose these capabilities on `OpenAIClient`
  - Transcription produces text in the primary, supported, spoken input language of the audio data provided, together with any optional associated metadata
  - Translation produces text, translated to English and reflective of the audio data provided, together with any optional associated metadata
  - These methods work for both Azure OpenAI and non-Azure `api.openai.com` client configurations
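As a rough illustration, calls to the new audio methods might look like the sketch below. The endpoint, key, deployment name (`my-whisper-deployment`), file name, and exact option/overload shapes are illustrative assumptions for this beta surface, not verbatim API guarantees:

```csharp
// Illustrative sketch only: names and option shapes may differ slightly
// between beta versions of Azure.AI.OpenAI.
using Azure;
using Azure.AI.OpenAI;

OpenAIClient client = new OpenAIClient(
    new Uri("https://myaccount.openai.azure.com/"),
    new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")));

// Transcribe audio in its spoken input language...
var transcriptionOptions = new AudioTranscriptionOptions()
{
    AudioData = BinaryData.FromBytes(File.ReadAllBytes("speech.wav")),
};
Response<AudioTranscription> transcription
    = client.GetAudioTranscription("my-whisper-deployment", transcriptionOptions);
Console.WriteLine(transcription.Value.Text);

// ...or translate the same audio to English.
var translationOptions = new AudioTranslationOptions()
{
    AudioData = BinaryData.FromBytes(File.ReadAllBytes("speech.wav")),
};
Response<AudioTranslation> translation
    = client.GetAudioTranslation("my-whisper-deployment", translationOptions);
Console.WriteLine(translation.Value.Text);
```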
### Breaking Changes

- The underlying representation of `PromptFilterResults` (for `Completions` and `ChatCompletions`) has had its response body key changed from `prompt_annotations` to `prompt_filter_results`
- Prior versions of the `Azure.AI.OpenAI` library may no longer populate `PromptFilterResults` as expected, and it is highly recommended to upgrade to this version if the use of Azure OpenAI content moderation annotations for input data is desired
- If a library version upgrade is not immediately possible, it is advised to use `Response<T>.GetRawResponse()` and manually extract the `prompt_filter_results` object from the top level of the `Completions` or `ChatCompletions` response `Content` payload
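On older library versions, that manual workaround could be sketched as follows. Only the `prompt_filter_results` JSON key is taken from this changelog; the deployment name, options variable, and surrounding calls are illustrative placeholders:

```csharp
// Sketch of the raw-response workaround for pre-beta.8 library versions;
// names other than the "prompt_filter_results" key are placeholders.
using System.Text.Json;
using Azure;
using Azure.AI.OpenAI;

Response<ChatCompletions> response
    = client.GetChatCompletions("my-gpt-deployment", chatCompletionsOptions);

// Read the raw HTTP content instead of relying on the stale deserializer.
BinaryData rawContent = response.GetRawResponse().Content;
using JsonDocument document = JsonDocument.Parse(rawContent.ToString());

if (document.RootElement.TryGetProperty(
        "prompt_filter_results", out JsonElement promptFilterResults))
{
    // Each array element carries the content moderation annotations
    // for one input prompt.
    Console.WriteLine(promptFilterResults.GetRawText());
}
```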
### Bugs Fixed

- Support for the described breaking change for `PromptFilterResults` was added, and this library version will now again deserialize `PromptFilterResults` appropriately
- `PromptFilterResults` and `ContentFilterResults` are now exposed on the result classes for streaming Completions and Chat Completions. `Streaming(Chat)Completions.PromptFilterResults` will report an index-sorted list of all prompt annotations received so far, while `Streaming(Chat)Choice.ContentFilterResults` will reflect the latest-received content annotations that were populated and received while streaming
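Consuming those streaming properties might look roughly like the sketch below. The deployment name and prompt are placeholders, and the iteration pattern is an assumption based on the beta-era streaming surface:

```csharp
// Sketch of reading filter results while streaming chat completions;
// "my-gpt-deployment" and the prompt are placeholders, and the exact
// streaming iteration pattern may vary by beta version.
using Azure;
using Azure.AI.OpenAI;

var chatOptions = new ChatCompletionsOptions()
{
    Messages = { new ChatMessage(ChatRole.User, "Hello!") },
};

Response<StreamingChatCompletions> response
    = await client.GetChatCompletionsStreamingAsync("my-gpt-deployment", chatOptions);
using StreamingChatCompletions streamingCompletions = response.Value;

await foreach (StreamingChatChoice choice in streamingCompletions.GetChoicesStreaming())
{
    await foreach (ChatMessage message in choice.GetMessageStreaming())
    {
        Console.Write(message.Content);
    }
    // Latest-received content annotations for this choice, if any arrived.
    if (choice.ContentFilterResults is not null)
    {
        Console.WriteLine("Updated content filter annotations received for this choice.");
    }
}

// Index-sorted list of all prompt annotations received during the stream.
foreach (var promptFilterResult in streamingCompletions.PromptFilterResults)
{
    Console.WriteLine("Prompt filter annotations received.");
}
```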