This version adds a new translation provider, Local Server. It is intended for use with locally hosted AI models, e.g. LM Studio.
Do not expect great results from locally hosted models: the small, quantized models you can find on Hugging Face and run on consumer hardware are far less capable and more prone to errors than the large models hosted by OpenAI, Google, etc., and they are comparatively slow. Please report your results, good and bad, in the discussions section to help the community figure out what is possible and what to avoid.
The provider uses the httpx library to make requests, so it has no additional dependencies. You must specify the server's address (e.g. `http://localhost:1234`) and the endpoint to use (e.g. `/v1/chat/completions`). If the endpoint offers a "chat" style interface you should enable "Supports Conversation", and if it allows instructions to be sent as a "system" user you should also enable "Supports System Messages". Otherwise the endpoint is assumed to be a completion endpoint, and the prompt will be constructed as a script that needs completing.
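To make the chat-style mode concrete, here is a minimal sketch of the kind of request such an endpoint expects, using httpx as the provider does. The address, endpoint, and payload fields are illustrative assumptions rather than the provider's actual code.

```python
# A minimal sketch of a chat-style request, assuming an OpenAI-compatible
# server such as LM Studio. Address, endpoint and payload are illustrative.
import httpx

server_address = "http://localhost:1234"
endpoint = "/v1/chat/completions"

payload = {
    "messages": [
        # Only sent if "Supports System Messages" is enabled
        {"role": "system", "content": "You are a subtitle translator."},
        {"role": "user", "content": "Translate these lines into Spanish:\n1) Hello.\n2) Goodbye."},
    ],
    "temperature": 0.0,
}

response = httpx.post(server_address + endpoint, json=payload, timeout=300)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```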
The prompt can be customised using a template, which may be useful if the model is trained to expect a specific format. The options are limited, though, and you will need to modify the code if you want to interface with a model that has more specific requirements.
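As a purely hypothetical illustration, a completion-style prompt might be templated like this; the placeholder names and layout are examples only, not the provider's actual template syntax.

```python
# Hypothetical template for a completion-style prompt; the placeholder names
# and format are illustrative, not the provider's actual template syntax.
prompt_template = (
    "### Instruction:\n{instructions}\n\n"
    "### Input:\n{lines}\n\n"
    "### Response:\n"
)

prompt = prompt_template.format(
    instructions="Translate the subtitles into Spanish.",
    lines="1) Hello there.\n2) How are you?",
)
print(prompt)
```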
Although the provider is intended for locally hosted models, it will work with any server that offers an OpenAI-compatible endpoint, including OpenAI's own. Optional parameters for an API key and model are provided in case they are needed.
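For example, when targeting a hosted OpenAI-compatible server, the optional API key travels as a bearer token and the model name goes in the request body. This sketch uses placeholder values for both:

```python
# Sketch showing where the optional API key and model fit when targeting a
# hosted OpenAI-compatible server; the key and model name are placeholders.
import httpx

response = httpx.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Translate into Spanish: Good morning"}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```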