Added support for the gpt-instruct models, though I don't currently recommend using them. In my tests the translation quality is comparable to the chat models: sometimes a little better, sometimes a little worse. However, the instruct models only support the 4K token window of the earlier gpt-3.5 models (approx. 40 lines per batch) and have a higher per-token cost, so for most users gpt-3.5-turbo-16k with a maximum batch size of about 100 lines will be more efficient and just as good.
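For a rough sense of where those batch sizes come from, here's a back-of-the-envelope sketch in Python; the tokens-per-line and prompt-overhead figures are assumptions for illustration, not values taken from the project.

```python
def estimate_max_lines(context_window: int,
                       tokens_per_line: int = 40,
                       prompt_overhead: int = 500) -> int:
    """Rough estimate of how many subtitle lines fit in one batch.

    Assumes the translated output is about as long as the input, so each
    line costs roughly twice its token count, plus a fixed allowance for
    the instructions. All figures here are illustrative guesses.
    """
    return (context_window - prompt_overhead) // (2 * tokens_per_line)

print(estimate_max_lines(4096))    # ~44 lines for the 4K instruct models
print(estimate_max_lines(16384))   # ~198 lines; ~100 is a more sensible cap in practice
```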
The main purpose of the exercise was to refactor the code base to support different translation clients, which opens the door to supporting other models and platforms.
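As an illustration of the shape that takes, a translation-client abstraction might look something like the sketch below. The class and method names are hypothetical rather than the project's actual API; the point is that chat and instruct models (or any other backend) can sit behind a common interface. This assumes the openai v1.x Python client.

```python
from abc import ABC, abstractmethod
from openai import OpenAI


class TranslationClient(ABC):
    """Hypothetical base class - one subclass per backend or endpoint."""

    def __init__(self, model: str):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    @abstractmethod
    def request_translation(self, prompt: str) -> str:
        ...


class ChatTranslationClient(TranslationClient):
    """Talks to the chat completions endpoint (e.g. gpt-3.5-turbo-16k)."""

    def request_translation(self, prompt: str) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content


class InstructTranslationClient(TranslationClient):
    """Talks to the legacy completions endpoint (e.g. gpt-3.5-turbo-instruct)."""

    def request_translation(self, prompt: str) -> str:
        response = self.client.completions.create(
            model=self.model,
            prompt=prompt,
            max_tokens=2048,
        )
        return response.choices[0].text
```

A new model or platform then only needs its own subclass implementing request_translation.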
This release also fixes error handling for connection errors in the updated OpenAI API (which may solve some of the crashes people have reported), and now closes the connection to the server when stopping or quitting, so the app doesn't have to wait for active requests to complete before it can exit.
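A minimal sketch of that pattern, assuming the openai v1.x Python client; the retry count and function names are illustrative, not the project's actual code.

```python
import time

import openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def translate_with_retry(prompt: str, retries: int = 3) -> str:
    """Retry transient connection failures instead of crashing outright."""
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo-16k",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except openai.APIConnectionError:
            if attempt == retries - 1:
                raise  # give up and surface the error instead of hanging
            time.sleep(2 ** attempt)  # simple exponential backoff


def shutdown() -> None:
    """Close the client's underlying HTTP connections when stopping or
    quitting, rather than waiting on in-flight requests."""
    client.close()
```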
What's Changed
- Added support for gpt-instruct models by @machinewrapped in #92
Full Changelog: v0.4.7...v0.5.0