### Added

- Added audio filter `KrispVivaFilter` using the Krisp VIVA SDK.
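  A minimal usage sketch, assuming the import path below (like the existing Krisp filter, it plugs into a transport's input audio via `audio_in_filter`):

  ```python
  # Sketch only: the module path is an assumption; KrispVivaFilter is the new
  # VIVA-SDK-based filter added in this release.
  from pipecat.audio.filters.krisp_viva_filter import KrispVivaFilter  # assumed path
  from pipecat.transports.base_transport import TransportParams

  params = TransportParams(
      audio_in_enabled=True,
      audio_in_filter=KrispVivaFilter(),  # clean incoming audio with the Krisp VIVA SDK
      audio_out_enabled=True,
  )
  ```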
- Added `--folder` argument to the runner, allowing files saved in that folder to be downloaded from `http://HOST:PORT/file/FILE`.
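  For example (a hedged sketch; the folder name, file name, host, and port are placeholders), a bot started with `python bot.py --folder recordings` could save artifacts like this and have them served by the runner:

  ```python
  import os
  import wave

  FOLDER = "recordings"  # must match the value passed to --folder
  os.makedirs(FOLDER, exist_ok=True)

  # Write an (empty) WAV file into the download folder as an illustration.
  with wave.open(os.path.join(FOLDER, "call.wav"), "wb") as wf:
      wf.setnchannels(1)
      wf.setsampwidth(2)
      wf.setframerate(16000)
      wf.writeframes(b"")

  # The file is then downloadable from http://HOST:PORT/file/call.wav.
  ```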
- Added `GeminiLiveVertexLLMService`, for accessing Gemini Live via Google Vertex AI.
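  A construction sketch, assuming the service follows the usual Vertex AI pattern of a service-account credential plus project and location (the import path and every parameter name here are assumptions, not the confirmed signature):

  ```python
  # Assumed import path and parameter names; check the service for the real ones.
  from pipecat.services.gemini_live.vertex import GeminiLiveVertexLLMService  # assumed

  llm = GeminiLiveVertexLLMService(
      credentials_path="/path/to/service-account.json",  # assumed parameter name
      project_id="my-gcp-project",                        # assumed parameter name
      location="us-central1",                             # assumed parameter name
  )
  ```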
- Added some new configuration options to `GeminiLiveLLMService`:

  - `thinking`
  - `enable_affective_dialog`
  - `proactivity`

  Note that these new configuration options require using a newer model than the default, like "gemini-2.5-flash-native-audio-preview-09-2025". The last two require specifying `http_options=HttpOptions(api_version="v1alpha")`.
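  A configuration sketch (the import path, the exact `InputParams` field names and value types, and where `http_options` is passed are assumptions based on the option names above):

  ```python
  import os

  from google.genai.types import HttpOptions

  from pipecat.services.gemini_live.llm import GeminiLiveLLMService  # assumed path

  llm = GeminiLiveLLMService(
      api_key=os.getenv("GOOGLE_API_KEY"),
      model="gemini-2.5-flash-native-audio-preview-09-2025",  # newer model required
      params=GeminiLiveLLMService.InputParams(  # field names and value types assumed
          thinking=True,
          enable_affective_dialog=True,
          proactivity=True,
      ),
      # enable_affective_dialog and proactivity need the v1alpha API surface.
      http_options=HttpOptions(api_version="v1alpha"),
  )
  ```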
- Added `on_pipeline_error` event to `PipelineTask`. This event will get fired when an `ErrorFrame` is pushed (use `FrameProcessor.push_error()`).

  ```python
  @task.event_handler("on_pipeline_error")
  async def on_pipeline_error(task: PipelineTask, frame: ErrorFrame):
      ...
  ```
- Added a `service_tier` `InputParam` to the `BaseOpenAILLMService`. This parameter can influence the latency of the response. For example, `"priority"` will result in faster completions, in exchange for a higher price.
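  A usage sketch, assuming the parameter is set through the service's `InputParams` like the other OpenAI input parameters (the import path is also an assumption):

  ```python
  from pipecat.services.openai.llm import OpenAILLMService  # assumed path

  llm = OpenAILLMService(
      model="gpt-4o",
      # "priority" trades a higher price for lower-latency completions.
      params=OpenAILLMService.InputParams(service_tier="priority"),
  )
  ```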
### Changed

- Updated `GeminiLiveLLMService` to use the `google-genai` library rather than use WebSockets directly.
### Deprecated

- `LivekitFrameSerializer` is now deprecated. Use `LiveKitTransport` instead.

- `pipecat.services.openai_realtime` is now deprecated, use `pipecat.services.openai.realtime` instead, or `pipecat.services.azure.realtime` for Azure Realtime.

- `pipecat.services.aws_nova_sonic` is now deprecated, use `pipecat.services.aws.nova_sonic` instead.

- `GeminiMultimodalLiveLLMService` is now deprecated, use `GeminiLiveLLMService` instead.
### Fixed

- Fixed a `GoogleVertexLLMService` issue that would generate an error if no token information was returned.

- `GeminiLiveLLMService` will now end gracefully (i.e. after the bot has finished) upon receiving an `EndFrame`.

- `GeminiLiveLLMService` will try to seamlessly reconnect when it loses its connection.