pipecat-ai/pipecat v0.0.75

Added

  • Added an aggregate_sentences arg to CartesiaTTSService,
    ElevenLabsTTSService, NeuphonicTTSService, and RimeTTSService, with a
    default value of True. When aggregate_sentences is True, the TTS service
    aggregates the streamed LLM tokens into sentences before synthesis. Note:
    setting the value to False requires a custom processor before the TTS
    service to aggregate LLM tokens.
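
    A minimal sketch of passing the new flag; the import path, api_key, and
    voice_id shown here are assumptions based on Pipecat's other Cartesia
    examples:

        import os

        from pipecat.services.cartesia.tts import CartesiaTTSService

        # Default behavior: streamed LLM tokens are aggregated into full
        # sentences before being sent for synthesis.
        tts = CartesiaTTSService(
            api_key=os.getenv("CARTESIA_API_KEY"),
            voice_id="YOUR_VOICE_ID",  # placeholder voice ID
            aggregate_sentences=True,  # the default
        )

        # With aggregate_sentences=False, a custom processor must sit before
        # the TTS service in the pipeline to assemble tokens into sentences.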

  • Added kwargs to OLLamaLLMService, allowing configuration args to be
    passed through to Ollama.
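
    A short sketch of configuring the service at construction time; the import
    path, model name, and base_url are assumptions, and which kwargs Ollama
    accepts depends on your setup:

        from pipecat.services.ollama.llm import OLLamaLLMService

        # Keyword arguments are now forwarded to Ollama, so the service can
        # be configured when it is constructed.
        llm = OLLamaLLMService(
            model="llama3.1",                      # placeholder model name
            base_url="http://localhost:11434/v1",  # assumed local Ollama endpoint
        )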

  • Added call hang-up error handling to TwilioFrameSerializer, covering the
    case where the user hangs up before the serializer ends the call.
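
    The gist of the change is defensive: if the caller has already
    disconnected, attempting to end the call should not crash the pipeline. A
    rough illustration of the pattern (not the serializer's actual code;
    hang_up is a hypothetical stand-in for the real hang-up routine):

        import logging

        logger = logging.getLogger(__name__)


        async def end_call_safely(hang_up, call_sid: str) -> None:
            """Hang up a call, tolerating calls the user already ended."""
            try:
                await hang_up(call_sid)
            except Exception as exc:
                # If the user hung up first, Twilio rejects the request;
                # log it rather than letting the error propagate.
                logger.warning("Hang-up failed (call may already be over): %s", exc)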

Changed

  • Updated RTVIObserver and RTVIProcessor to match the new RTVI 1.0.0 protocol.
    This includes:

    • Deprecating support for all messages related to service configuration and
      actions.
    • Adding support for obtaining and logging data about the client, including
      its RTVI version and optionally included system information
      (OS/browser/etc.).
    • Adding support for handling the new client-message RTVI message, either
      through an on_client_message event handler or by listening for a new
      RTVIClientMessageFrame (see the sketch after the migration link below).
    • Adding support for responding to a client-message with a server-response,
      either through a direct call on the RTVIProcessor or by pushing a new
      RTVIServerResponseFrame.
    • Adding built-in support for handling the new append-to-context RTVI
      message, which allows a client to add to the user or assistant LLM
      context. No extra code is required to support this behavior.
    • Updating all JavaScript and React client RTVI examples to use version
      1.0.0 of the clients.

    Get started migrating to RTVI protocol 1.0.0 by following the migration guide:
    https://docs.pipecat.ai/client/migration-guide
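
    A rough sketch of the new client-message flow, using Pipecat's usual
    event-handler registration. The handler signature, the
    RTVIServerResponseFrame fields, and the RTVIProcessor construction shown
    here are assumptions; see the migration guide above for the exact shapes:

        from pipecat.processors.frameworks.rtvi import (
            RTVIProcessor,
            RTVIServerResponseFrame,
        )

        # Construct as in your existing pipeline setup (arguments omitted here).
        rtvi = RTVIProcessor()


        @rtvi.event_handler("on_client_message")
        async def on_client_message(processor, message):
            # `message` carries the client-message payload sent by the client.
            # Reply by pushing a server-response frame back through the
            # processor (a direct call on the RTVIProcessor also works).
            # Note: the frame's field names below are assumptions.
            await processor.push_frame(
                RTVIServerResponseFrame(client_msg=message, data={"ok": True})
            )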

  • Refactored AWSBedrockLLMService and AWSPollyTTSService to work
    asynchronously using aioboto3 instead of the boto3 library.

  • The UserIdleProcessor now handles the scenario where function calls take
    longer than the idle timeout duration. This allows you to use the
    UserIdleProcessor in conjunction with function calls that take a while to
    return a result.
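
    A minimal sketch of the usual callback-and-timeout setup, assuming the
    constructor shape documented for earlier releases (the timeout value and
    callback body are placeholders):

        from pipecat.processors.user_idle_processor import UserIdleProcessor


        async def handle_user_idle(user_idle: UserIdleProcessor) -> None:
            # Nudge the user, play a reminder prompt, end the call, etc.
            ...


        # With this release, a function call that runs longer than `timeout`
        # no longer counts as user idle time while it is still in flight.
        user_idle = UserIdleProcessor(callback=handle_user_idle, timeout=10.0)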

Fixed

  • Updated the NeuphonicTTSService to work with the new WebSocket API.

  • Fixed an issue with RivaSTTService where the watchdog feature was causing
    an error on initialization.

Performance

  • Removed an unnecessary push task in each FrameProcessor.
