livekit/agents: livekit-agents@1.3.3

New Features

Observability

To learn more about the new observability features, check out our full write-up on the LiveKit blog. It walks through how session playback, trace inspection, and synchronized logs streamline debugging for voice agents.

New CLI

The CLI has been redesigned, and a new text-only mode was added so you can test your agent without using voice.

python3 my_agent.py console --text

You can now also configure the input and output devices directly via the --input-device and --output-device flags.

python3 my_agent.py console --input-device "AirPods" --output-device "MacBook"

New AgentServer API

We’ve renamed Worker to AgentServer, and you now need to use a decorator to define the entrypoint. All existing functionality remains backward compatible. This change lays the groundwork for upcoming design improvements and new features.

from livekit.agents import AgentServer, JobContext, JobProcess

server = AgentServer()

def prewarm(proc: JobProcess): ...
def load(proc: JobProcess): ...

server.setup_fnc = prewarm
server.load_fnc = load

@server.rtc_session(agent_name="my_customer_service_agent")
async def entrypoint(ctx: JobContext): ...
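
For reference, the same setup under the previous Worker-based API looked roughly like this (using WorkerOptions as in the 1.2.x releases):

from livekit import agents

def prewarm(proc: agents.JobProcess): ...

async def entrypoint(ctx: agents.JobContext): ...

if __name__ == "__main__":
    agents.cli.run_app(
        agents.WorkerOptions(
            entrypoint_fnc=entrypoint,
            prewarm_fnc=prewarm,
            agent_name="my_customer_service_agent",
        )
    )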

Session Report & on_session_end callback

Use the on_session_end callback to generate a structured SessionReport that contains the conversation history, events, recording metadata, and the agent's configuration.

import json

from livekit.agents import AgentServer, JobContext

server = AgentServer()

async def on_session_end(ctx: JobContext) -> None:
    report = ctx.make_session_report()
    print(json.dumps(report.to_dict(), indent=2))
    
    chat_history = report.chat_history
    # Post-process the session (e.g., run final evaluations, generate a summary, ...)

@server.rtc_session(on_session_end=on_session_end)
async def my_agent(ctx: JobContext) -> None:
    ...
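
For example, a variant of the callback above that writes the report to disk for offline analysis (the session_reports directory and file naming are purely illustrative):

import json
from pathlib import Path

async def on_session_end(ctx: JobContext) -> None:
    report = ctx.make_session_report()

    # Illustrative: persist the full report as JSON, keyed by room name.
    out_path = Path("session_reports") / f"{ctx.room.name}.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(report.to_dict(), indent=2))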

AgentHandoff item

To capture everything that occurred during your session, we added an AgentHandoff item to the ChatContext.

class AgentHandoff(BaseModel):
    ...
    old_agent_id: str | None
    new_agent_id: str
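
When post-processing a session, you can pick these items out of the recorded history. A minimal sketch, assuming AgentHandoff is importable from livekit.agents.llm and that handoff items appear in the chat history's items list alongside regular messages:

from livekit.agents.llm import AgentHandoff

def print_handoffs(report) -> None:
    # Walk the recorded chat history and log every agent-to-agent handoff.
    for item in report.chat_history.items:
        if isinstance(item, AgentHandoff):
            print(f"handoff: {item.old_agent_id or '<initial>'} -> {item.new_agent_id}")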

Improved turn detection model

We updated the turn-detection model, resulting in measurable accuracy improvements across most languages. The table below shows the change in tnr@0.993 between versions 0.4.0 and 0.4.1, along with the percentage difference.

This new version also handles special user inputs such as email addresses, street addresses, and phone numbers much more effectively.

(Table: per-language tnr@0.993 for model versions 0.4.0 and 0.4.1, with the percentage difference.)

TaskGroup

We added TaskGroup, which lets you run multiple tasks concurrently and wait for all of them to finish. This is useful when collecting several pieces of information from a user where the order doesn’t matter, or when the user may revise earlier inputs while continuing the flow.

We’ve also added an example that uses TaskGroup to build a SurveyAgent, which you can use as a reference.

task_group = TaskGroup()

# Register the tasks to run; they can complete in any order.
task_group.add(lambda: GetEmailTask(), id="get_email_task", description="Get the email address")
task_group.add(lambda: GetPhoneNumberTask(), id="phone_number_task", description="Get the phone number")
task_group.add(lambda: GetCreditCardTask(), id="credit_card_task", description="Get credit card")

# Await the group to run all tasks concurrently and gather their results.
results = await task_group
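
The GetEmailTask, GetPhoneNumberTask, and GetCreditCardTask classes above aren't defined in the snippet; a rough sketch of what one could look like, assuming the existing AgentTask pattern (the instructions and tool below are purely illustrative):

from livekit.agents import AgentTask, function_tool

class GetEmailTask(AgentTask[str]):
    def __init__(self) -> None:
        super().__init__(
            instructions="Ask the user for their email address and confirm the spelling.",
        )

    async def on_enter(self) -> None:
        # Prompt the user as soon as the task becomes active.
        self.session.generate_reply()

    @function_tool
    async def record_email(self, email: str) -> None:
        """Called when the user has provided their email address."""
        # Completing the task hands the collected value back to the TaskGroup.
        self.complete(email)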

IVR systems

Agents can now optionally handle IVR-style interactions. Enabling ivr_detection allows the session to identify and respond appropriately to IVR tones or patterns, and min_endpointing_delay lets you control how long the system waits before ending a turn—useful for menu-style inputs.

session = AgentSession(
    ivr_detection=True,  # detect IVR tones/menus and respond accordingly
    min_endpointing_delay=5,  # wait up to 5 seconds before ending the user's turn
)

llm_node FlushSentinel

We added a FlushSentinel marker that can be yielded from llm_node to flush partial LLM output to TTS and start a new TTS stream. This lets you emit a short, early response (for example, when a specific tool call is detected) while the main LLM response continues in the background. For a concrete pattern, see the flush_llm_node.py example.

async def llm_node(
    self, chat_ctx: llm.ChatContext, tools: list[llm.FunctionTool], model_settings: ModelSettings
) -> AsyncIterable[str | llm.ChatChunk | FlushSentinel]:
    yield "This is the first sentence"
    yield FlushSentinel()  # flush the text so far to TTS and start a new TTS stream
    yield "Another TTS generation"

What's Changed

New Contributors

Full Changelog: https://github.com/livekit/agents/compare/livekit-agents@1.2.18...livekit-agents@1.3.3
