Fish Audio S2 — Pre-Release
The best text-to-speech system among both open-source and closed-source models.
Trained on 10M+ hours of audio across ~50 languages, S2 combines a Dual-AR architecture (Qwen3 backbone) with GRPO reinforcement learning alignment to produce natural, emotionally rich speech with fine-grained inline control.
Technical Report · Blog · Model · Playground
Model
| Variant | Params | Codec | Output |
|---|---|---|---|
| S2-Pro | 4B (slow) + 400M (fast) | ModifiedDAC, 10 codebooks, ~21 Hz | 44.1 kHz |
Highlights
- Dual-AR: the Slow AR (4B) predicts the semantic codebook along the time axis; the Fast AR (400M) fills the 9 residual codebooks at each step
- Inline Control: free-form tags such as `[laugh]`, `[whispers]`, and `[super happy]` at word level
- RL Alignment: GRPO with a unified data-reward pipeline; the same model handles data filtering and the RL reward
- SGLang Streaming: RTF 0.195 (real-time factor: synthesis time over audio duration), TTFA ~100 ms (time to first audio), 3000+ tokens/s on a single H200
- 50+ languages, multi-speaker (`<|speaker:i|>`), multi-turn dialogue, rapid voice cloning from a 10-30 s reference
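The tag and speaker conventions above are plain text markers, so they can be sketched as string construction. The `[tag]` and `<|speaker:i|>` formats follow the highlights; the dialogue content and the helper function below are illustrative, not part of the documented API.

```python
# Build a multi-speaker, multi-turn script using the inline-control
# conventions from the highlights: word-level [tags] and <|speaker:i|> markers.

def speaker_turn(speaker_id: int, text: str) -> str:
    """Prefix one dialogue turn with its speaker marker."""
    return f"<|speaker:{speaker_id}|> {text}"

script = "\n".join([
    speaker_turn(0, "[super happy] We just shipped S2!"),
    speaker_turn(1, "[whispers] Already? [laugh] That was fast."),
])

print(script)
```

The resulting `script` string can then be passed to the model as ordinary input text; no separate control channel is needed.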
What's Changed
Model & Inference
- New Dual-AR architecture with Qwen3 backbone, replacing Fish-Speech v1.5
- New `ModifiedDAC` audio codec (replaces Firefly/VQ-GAN)
- Support for the `fish_qwen3_omni` checkpoint format (sharded safetensors), with backward compatibility
- Fixed: torch.compile bugs, a GPU memory leak, and audio quality issues
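Sharded safetensors checkpoints commonly ship with a `model.safetensors.index.json` that maps tensor names to shard files (the standard Hugging Face convention; whether `fish_qwen3_omni` uses exactly this layout is an assumption). A minimal sketch that reads such an index and groups tensors by shard, so each shard file is opened only once:

```python
import json
from collections import defaultdict

def tensors_by_shard(index_json: str) -> dict[str, list[str]]:
    """Group tensor names by the shard file that stores them, following
    the standard sharded-safetensors index layout ("weight_map" key)."""
    index = json.loads(index_json)
    shards: dict[str, list[str]] = defaultdict(list)
    for tensor_name, shard_file in index["weight_map"].items():
        shards[shard_file].append(tensor_name)
    return dict(shards)

# Example index in the standard format (tensor and file names are illustrative).
example = json.dumps({
    "metadata": {"total_size": 8_000_000_000},
    "weight_map": {
        "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
        "model.layers.0.attn.weight": "model-00001-of-00002.safetensors",
        "lm_head.weight": "model-00002-of-00002.safetensors",
    },
})

print(tensors_by_shard(example))
```

A loader would then iterate over the grouped shard files and read each one with the `safetensors` library.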
Docker & Deployment
- Docker overhaul: multi-target builds, compose support, health checks, non-root user
- SGLang server integration
API & Server
- Reference voice management API (CRUD), multipart upload support
- Various server bug fixes; new `/health` endpoint
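The multipart upload mentioned above can be illustrated with a stdlib-only request-body builder; the field names (`name`, `audio`) are assumptions, not the documented API, and the wire format follows RFC 7578 (`multipart/form-data`).

```python
import uuid

def build_multipart(fields: dict[str, str], file_field: str,
                    filename: str, file_bytes: bytes) -> tuple[bytes, str]:
    """Assemble a multipart/form-data body, e.g. for a reference-voice
    upload. Returns (body, content_type). Field names are illustrative."""
    boundary = uuid.uuid4().hex
    parts = []
    for key, value in fields.items():
        parts.append(
            f'--{boundary}\r\nContent-Disposition: form-data; name="{key}"'
            f"\r\n\r\n{value}\r\n".encode()
        )
    # The file part carries a filename and an explicit content type.
    parts.append(
        f'--{boundary}\r\nContent-Disposition: form-data; name="{file_field}"; '
        f'filename="{filename}"\r\nContent-Type: audio/wav\r\n\r\n'.encode()
        + file_bytes + b"\r\n"
    )
    parts.append(f"--{boundary}--\r\n".encode())
    return b"".join(parts), f"multipart/form-data; boundary={boundary}"

body, ctype = build_multipart({"name": "my-voice"}, "audio",
                              "reference.wav", b"RIFF....WAVE")
```

The body can then be sent with `urllib.request` by setting the returned string as the `Content-Type` header.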
Finetune
- Full finetune pipeline for S1/S2 (datasets, training, LoRA merge)
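The LoRA merge step folds the low-rank adapter back into the base weights, W' = W + (alpha/rank) * B @ A. A pure-Python sketch on tiny nested-list matrices (the scaling convention follows the standard LoRA formulation; S2's actual adapter shapes and config are not specified here):

```python
def matmul(a, b):
    """Naive matrix multiply for small nested-list matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge_lora(w, lora_a, lora_b, alpha: float, rank: int):
    """Fold a LoRA update into the base weight: W' = W + (alpha/rank) * B @ A."""
    delta = matmul(lora_b, lora_a)  # (out, rank) @ (rank, in) -> (out, in)
    scale = alpha / rank
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# Toy example: 2x2 base weight, rank-1 adapter, alpha = 2.
w = [[1.0, 0.0], [0.0, 1.0]]
lora_a = [[1.0, 2.0]]      # A: (rank=1, in=2)
lora_b = [[0.5], [0.25]]   # B: (out=2, rank=1)
merged = merge_lora(w, lora_a, lora_b, alpha=2.0, rank=1)
print(merged)  # [[2.0, 2.0], [0.5, 2.0]]
```

After the merge, the adapter matrices can be discarded and the model served as a plain dense checkpoint with no inference-time overhead.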
Docs & Infra
- README & MkDocs rewritten for S2 across 6 languages
- License updated to Fish Audio Research License
- Removed legacy code (Firefly VQ-GAN, SenseVoice, Fish Agent, old batch files)