Summary
- Amazon Data Firehose — full emulation of all 12 API operations (CreateDeliveryStream,
DeleteDeliveryStream, DescribeDeliveryStream, ListDeliveryStreams, PutRecord, PutRecordBatch,
UpdateDestination, TagDeliveryStream, UntagDeliveryStream, ListTagsForDeliveryStream,
StartDeliveryStreamEncryption, StopDeliveryStreamEncryption). S3 destinations write records
synchronously to the local S3 emulator. Responses match AWS shapes, including
EncryptionConfiguration defaults, the Source block for KinesisStreamAsSource, and field merging
in UpdateDestination. Credential scope is kinesis-firehose; target prefix is Firehose_20150804.
- Virtual-hosted-style S3 — requests with a {bucket}.localhost[:{port}] Host header are rewritten
to path-style and forwarded to the S3 handler, so AWS SDKs configured for virtual-hosted
endpoints work unchanged.
- DynamoDB OR/AND expression fix — Python's boolean short-circuiting skipped right-hand token
consumption in the recursive-descent parser when the left operand was already truthy/falsy. This
caused "Invalid expression: Expected RPAREN, got NAME_REF" on expressions like
attribute_not_exists(#0) OR #1 <= :0, as reported by PynamoDB users with numeric
ExpressionAttributeNames keys on composite-key tables.
Test plan
- 450 integration tests collected, 449 pass (the 1 failure is a pre-existing flaky ECS timing test unrelated to this PR)
- All 16 Firehose tests pass against Docker image
- DynamoDB fix verified by unit test and regression test
- Virtual-hosted S3 routing verified via Docker image
Notes
- Firehose matches or exceeds LocalStack Community free tier: fixes 4 known LocalStack bugs
(ExtendedS3 + PutRecord, KinesisStreamAsSource creation, HttpEndpoint description not returned,
Elasticsearch KeyError)
- Shared gaps with LocalStack Community (intentional for local dev): no Lambda record
transformation, no dynamic S3 partitioning, and non-S3 destinations buffer in-memory
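The UpdateDestination field merging mentioned in the summary can be sketched as a recursive overlay of the update onto the stored destination config. This is an assumed behavior sketch (function name is hypothetical), not the emulator's code: fields present in the update replace the old values, and fields omitted from the update are preserved:

```python
def merge_destination(existing: dict, update: dict) -> dict:
    """Recursively overlay `update` onto `existing` without mutating either.

    Nested dicts are merged key by key; any other value in the update
    replaces the existing value wholesale."""
    merged = dict(existing)
    for key, value in update.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_destination(merged[key], value)
        else:
            merged[key] = value
    return merged


current = {
    "BucketARN": "arn:aws:s3:::old-bucket",
    "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
}
update = {"BufferingHints": {"SizeInMBs": 10}}
print(merge_destination(current, update))
# BucketARN and IntervalInSeconds are kept; only SizeInMBs changes.
```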