This release adds support for streaming responses from the translator LLM, so translations are updated line by line rather than in chunks after each batch completes. Streaming is supported for the following providers:
- OpenRouter (all models, in theory)
- OpenAI (thinking models only, including gpt-5)
- Gemini
- Claude
- DeepSeek
Streaming can be enabled or disabled in the provider settings where supported (it is enabled by default).
Additionally, this release finally fixes a long-standing issue where the Scenes view would reset every time the translation was updated - obviously unbearable when updates are more or less continuous during streaming. The view is also more stable when splitting or merging scenes and batches.
If you're wondering why the version number has jumped to 1.5.2, it's because 1.5.0 was the inaugural release of PySubtrans, which packages the core functionality of llm-subtrans as a Python package that can be integrated into your own projects.
It also comes with a new batch translation script that can be used for bulk translation jobs with any of the supported providers. You can download the script and `pip install pysubtrans` to run it, without needing to clone the entire repository.
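As a minimal sketch of that setup (assuming Python and pip are already on your path), getting the package is a one-liner:

```shell
# Install the PySubtrans package from PyPI
pip install pysubtrans
```

The batch translation script can then be run against the installed package directly, with no repository checkout required.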