We now support local inference with Ollama! Ollama lets you run various open-source LLMs locally and exposes them through a local API. To use local inference, you need to have Ollama installed.
If you're interested, learn how to get started here! https://github.com/XInTheDark/raycast-g4f/wiki/Help-page:-Configure-Local-APIs#getting-started-ollama
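If you want to confirm your local Ollama instance is reachable before switching providers, a quick check against its API is enough. This is a minimal sketch, assuming Ollama's default endpoint (http://localhost:11434) and its standard /api/tags route; it isn't part of the extension itself.

```ts
// Minimal sketch: verify the local Ollama API is up and list installed models.
// Assumes Ollama's default endpoint (http://localhost:11434); adjust if you changed the port.
async function checkOllama(baseUrl = "http://localhost:11434"): Promise<void> {
  const res = await fetch(`${baseUrl}/api/tags`); // returns the models pulled locally
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
  const { models } = (await res.json()) as { models: { name: string }[] };
  console.log("Available models:", models.map((m) => m.name).join(", "));
}

checkOllama().catch(console.error);
```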
In this update:
- Added the "Ollama Local API" provider.
- The "Configure GPT4Free Local API" command has been renamed to "Configure Local APIs", and it now manages the settings for both G4F and Ollama APIs.
What's Changed
- Fix Blackbox formatting by @XInTheDark in #72
- Add Ollama Local API support by @XInTheDark in #74
Full Changelog: v2.4...v2.5