By default, ShellGPT uses OpenAI's large language models. Starting with this release, it can also work with locally hosted models, which can be a cost-effective alternative. To use local models, you will need to run your own API server; LocalAI, a self-hosted, OpenAI-compatible API, is one way to do this. Running LocalAI lets you serve language models on your own hardware and, depending on your usage, work without an internet connection. To set up LocalAI, please follow this comprehensive guide. Keep in mind that the performance of local models depends on your hardware and on the specific model you deploy.
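As a rough sketch of the flow (the Docker image, port, and the `OPENAI_API_HOST` key below are assumptions for illustration; the wiki guide is the authoritative reference), the setup boils down to starting a LocalAI server and pointing ShellGPT at it:

```shell
# Start a LocalAI server (image name, tag, and model volume are illustrative; see the LocalAI docs)
docker run -p 8080:8080 -v $PWD/models:/models quay.io/go-skynet/local-ai:latest

# Point ShellGPT at the local endpoint instead of api.openai.com
# (the exact key name may differ between versions; check the wiki guide)
export OPENAI_API_HOST="http://localhost:8080"
```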
- `--model` parameter is now a string (was an enum before); see the example after this list.
- Added LocalAI information to README.md.
- Created a guide on the wiki page.
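Since `--model` now accepts any string, a locally hosted model can be referenced by whatever name your LocalAI instance serves (the model name below is only an example):

```shell
# "ggml-gpt4all-j" is an example model name; substitute the model your LocalAI instance exposes
sgpt --model ggml-gpt4all-j "What is the capital of France?"
```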