slackapi/bolt-python v1.26.0


AI-Enabled Features: Loading States, Text Streaming, and Feedback Buttons

🍿 Preview

2025-10-06-loading-state-text-streaming-feedback.mov

⚡ Getting Started

Try the AI Agent Sample app to explore the AI-enabled features and existing Assistant helper:

# Create a new AI Agent app
$ slack create slack-ai-agent-app --template slack-samples/bolt-python-assistant-template
$ cd slack-ai-agent-app/

# Initialize Python Virtual Environment
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install -r requirements.txt

# Add your OPENAI_API_KEY
$ export OPENAI_API_KEY=sk-proj-ahM...

# Run the local dev server
$ slack run

⌛ Loading States

Loading states allow you not only to set the status (e.g. "My app is typing...") but also to add some personality by cycling through a collection of loading messages.

Bolt Assistant Class usage:

@assistant.user_message
def respond_in_assistant_thread(
    client: WebClient,
    context: BoltContext,
    get_thread_context: GetThreadContext,
    logger: Logger,
    payload: dict,
    say: Say,
    set_status: SetStatus,
):
    set_status(
        status="thinking...",
        loading_messages=[
            "Teaching the hamsters to type faster…",
            "Untangling the internet cables…",
            "Consulting the office goldfish…",
            "Polishing up the response just for you…",
            "Convincing the AI to stop overthinking…",
        ],
    )

Web Client SDK usage:

@app.message()
def handle_message(client, context, event, message):
    channel_id = event["channel"]
    thread_ts = event["thread_ts"]

    client.assistant_threads_setStatus(
        channel_id=channel_id,
        thread_ts=thread_ts,
        status="thinking...",
        loading_messages=[
            "Teaching the hamsters to type faster…",
            "Untangling the internet cables…",
            "Consulting the office goldfish…",
            "Polishing up the response just for you…",
            "Convincing the AI to stop overthinking…",
        ],
    )

🔮 Text Streaming Helper

The chat_stream() helper utility streamlines calls to the three text streaming methods:

# Start a new message stream
streamer = client.chat_stream(
    channel=channel_id,
    recipient_team_id=team_id,
    recipient_user_id=user_id,
    thread_ts=thread_ts,
)

# Loop over the OpenAI response stream (here, returned_message is the
# streaming response returned by the OpenAI SDK)
# https://platform.openai.com/docs/api-reference/responses/create
for event in returned_message:
    if event.type == "response.output_text.delta":
        streamer.append(markdown_text=f"{event.delta}")
    else:
        continue

feedback_block = create_feedback_block()
streamer.stop(blocks=feedback_block)
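
The loop above only forwards output-text delta events and skips everything else in the stream. That filtering logic can be exercised locally with stand-in event objects (the event shapes below are assumed for illustration and mimic the OpenAI Responses API stream):

```python
from types import SimpleNamespace

# Stand-in events mimicking an OpenAI Responses API stream
events = [
    SimpleNamespace(type="response.output_text.delta", delta="Hello, "),
    SimpleNamespace(type="response.created"),
    SimpleNamespace(type="response.output_text.delta", delta="world!"),
]

# Collect only the output-text deltas, as the streaming loop does
text = "".join(e.delta for e in events if e.type == "response.output_text.delta")
print(text)  # → Hello, world!
```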

🔠 Text Streaming Methods

An alternative to the Text Streaming Helper is calling the individual methods.

1) client.chat_startStream

First, start a chat text stream to stream a response to any message:

@app.message()
def handle_message(client, context, event, message):
    channel_id = event["channel"]
    thread_ts = event["thread_ts"]
    team_id = context.team_id
    user_id = context.user_id

    # Start a new message stream
    stream_response = client.chat_startStream(
        channel=channel_id,
        recipient_team_id=team_id,
        recipient_user_id=user_id,
        thread_ts=thread_ts,
    )
    stream_ts = stream_response["ts"]

2) client.chat_appendStream

After starting a chat text stream, you can then append text to it in chunks (often from your favourite LLM SDK) to convey a streaming effect:

for event in returned_message:
    if event.type == "response.output_text.delta":
        client.chat_appendStream(
            channel=channel_id, 
            ts=stream_ts, 
            markdown_text=f"{event.delta}"
        )
    else:
        continue
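
Appending every individual delta means one API call per token, which may run into rate limits on long responses. One way to reduce that is to buffer deltas and flush them in larger chunks. A minimal local sketch (this buffer is not part of the SDK; the flush callback stands in for the chat_appendStream call):

```python
class DeltaBuffer:
    """Accumulate streamed text deltas and flush them in larger chunks."""

    def __init__(self, flush_fn, min_chars: int = 80):
        self.flush_fn = flush_fn  # called with the buffered text on each flush
        self.min_chars = min_chars
        self.buffer = ""

    def append(self, delta: str):
        self.buffer += delta
        if len(self.buffer) >= self.min_chars:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = ""

# Demo: collect flushed chunks in a list instead of calling Slack
chunks = []
buf = DeltaBuffer(chunks.append, min_chars=5)
for delta in ["Hel", "lo ", "wor", "ld"]:
    buf.append(delta)
buf.flush()  # flush any remainder when the stream ends
print(chunks)  # → ['Hello ', 'world']
```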

3) client.chat_stopStream

Lastly, you can stop the chat text stream to finalize your message:

client.chat_stopStream(
    channel=channel_id, 
    ts=stream_ts,
    blocks=feedback_block
)

👍🏻 Feedback Buttons

Add feedback buttons to the bottom of a message, after stopping a text stream, to gather user feedback:

from typing import List

from slack_sdk.models.blocks import (
    Block,
    ContextActionsBlock,
    FeedbackButtonObject,
    FeedbackButtonsElement,
)

def create_feedback_block() -> List[Block]:
    blocks: List[Block] = [
        ContextActionsBlock(
            elements=[
                FeedbackButtonsElement(
                    action_id="feedback",
                    positive_button=FeedbackButtonObject(
                        text="Good Response",
                        accessibility_label="Submit positive feedback on this response",
                        value="good-feedback",
                    ),
                    negative_button=FeedbackButtonObject(
                        text="Bad Response",
                        accessibility_label="Submit negative feedback on this response",
                        value="bad-feedback",
                    ),
                )
            ]
        )
    ]
    return blocks

@app.message()
def handle_message(message, client):
    # ... previous streaming code ...
    
    # Stop the stream and add feedback buttons
    feedback_block = create_feedback_block()
    client.chat_stopStream(
        channel=channel_id, 
        ts=stream_ts, 
        blocks=feedback_block
    )
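
When a user clicks one of these buttons, your app receives a block_actions payload whose action carries the configured value ("good-feedback" or "bad-feedback"). Extracting it can be sketched as follows; the payload below is abbreviated for illustration, and in a real app this extraction would run inside a listener registered for the "feedback" action_id (e.g. with @app.action("feedback")):

```python
def extract_feedback(body: dict) -> str:
    """Return the value of the clicked feedback button from a block_actions payload."""
    return body["actions"][0]["value"]

# Abbreviated block_actions payload for illustration
payload = {
    "type": "block_actions",
    "actions": [{"action_id": "feedback", "value": "good-feedback"}],
}
print(extract_feedback(payload))  # → good-feedback
```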

Ⓜ️ Markdown Text Support

chat_postMessage supports markdown_text

response = client.chat_postMessage(
    channel="C111",
    text="hello",
    markdown_text=markdown_content
)

Learn more in slackapi/python-slack-sdk#1718

🧩 Markdown Block

📚 https://docs.slack.dev/reference/block-kit/blocks/markdown-block/

from slack_sdk.models.blocks import MarkdownBlock
...

@app.message("hello")
def message_hello(say):
    say(
        blocks=[
            MarkdownBlock(text="**let's go!**"),
        ],
        text="let's go!",
    )

Learn more in slackapi/python-slack-sdk#1748

🎞️ Workflows Featured Methods

Add support for the workflows.featured.{add|list|remove|set} methods:

app.client.workflows_featured_add(channel_id="C0123456789", trigger_ids=["Ft0123456789"])
app.client.workflows_featured_list(channel_ids="C0123456789")
app.client.workflows_featured_remove(channel_id="C0123456789", trigger_ids=["Ft0123456789"])
app.client.workflows_featured_set(channel_id="C0123456789", trigger_ids=["Ft0123456789"])

Learn more in slackapi/python-slack-sdk#1712

What's Changed

  • chore(deps): update pytest-cov requirement from <7,>=3 to >=3,<8 by @dependabot[bot] in #1365
  • chore(deps): bump actions/setup-python from 5.6.0 to 6.0.0 by @dependabot[bot] in #1363
  • chore(deps): bump actions/checkout from 4.2.2 to 5.0.0 by @dependabot[bot] in #1362
  • chore(deps): bump actions/stale from 9.1.0 to 10.0.0 by @dependabot[bot] in #1361
  • chore(deps): bump codecov/codecov-action from 5.4.3 to 5.5.1 by @dependabot[bot] in #1364
  • docs: add ai provider token instructions by @zimeg in #1340
  • build: require cheroot<11 with adapter test dependencies by @zimeg in #1375
  • build(deps): remove pytest lower bounds from testing requirements by @zimeg in #1333
  • docs: updates for combined quickstart by @haleychaas in #1378
  • build: install dependencies needed to autogenerate reference docs by @zimeg in #1377
  • ci: post regression notifications if scheduled tests do not succeed by @zimeg in #1376
  • chore(deps): bump mypy from 1.17.1 to 1.18.2 by @dependabot[bot] in #1379
  • docs: replace links from api.slack.com to docs.slack.dev redirects by @zimeg in #1383
  • feat: add ai-enabled features text streaming methods, feedback blocks, and loading state by @zimeg in #1387
  • version 1.26.0 by @zimeg in #1388

Full Changelog: v1.25.0...v1.26.0
