Features
- Modular, generator-based foundation (rewrote the entire codebase)
- Significantly easier to build Open Interpreter into your applications via `interpreter.chat(message)` (see JARVIS for an example implementation, and the sketch after this list)
- Run `interpreter --config` to configure `interpreter` to run with any settings by default (set your default language model, system message, etc.)
- Run `interpreter --conversations` to resume conversations
- Budget manager (thank you LiteLLM!) via `interpreter --max_budget 0.1` (sets the max budget per session in USD)
- Change the system message, temperature, max_tokens, etc. from the command line
- Central `/conversations` folder for persistent memory
- New hosted language models (thank you LiteLLM!) like Claude, Google PaLM, Cohere, and more.
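For embedding, here is a minimal Python sketch built around the `interpreter.chat(message)` call listed above. The `system_message` and `max_budget` attributes and the `stream`/`display` keywords are assumptions inferred from the CLI flags and the generator-based design described in this release, so verify them against your installed version.

```python
import interpreter

# Optional: mirror the CLI flags above programmatically.
# Attribute names are assumed from the flag names; check your installed version.
interpreter.system_message += "\nAlways explain a command before running it."
interpreter.max_budget = 0.1  # max spend per session in USD (LiteLLM budget manager)

# Simplest embedding: a single blocking call, as listed in the features above.
interpreter.chat("Plot the last 7 days of AAPL closing prices.")

# The generator-based core can also stream the response chunk by chunk;
# the stream/display keywords here are assumptions based on these release notes.
for chunk in interpreter.chat("Summarize that plot in one sentence.", stream=True, display=False):
    print(chunk)
```

In a real application the streamed chunks would be forwarded to your own UI rather than printed; the blocking form is enough for simple scripts.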
What's Changed
- Fix typo 'recieved' -> 'received' by @merlinfrombelgium in #361
- Pull request template created by @TanmayDoesAI in #365
- docs: move pr template to .github folder by @jordanbtucker in #373
- chore: enhance .gitignore by @jordanbtucker in #374
- chore: add vscode debug support by @jordanbtucker in #375
- Discard '/' as a command, since it blocked Mac/Linux from loading files, by @moming2k in #378
- Update interpreter.py for a typo error by @YUFEIFUT in #397
- Translated Open Interpreter README into Hindi by @zeelsheladiya in #417
- Add models to pull request template by @mak448a in #423
- Retry connecting to openai after hitting rate limit to fix #442 by @mathiasrw in #452
- Handle %load_message failure in interpreter.py by @richawo in #431
- add budget manager for api calls by @krrishdholakia in #316
- The Generator Update by @KillianLucas in #482
New Contributors
- @YUFEIFUT made their first contribution in #397
- @zeelsheladiya made their first contribution in #417
- @mak448a made their first contribution in #423
- @mathiasrw made their first contribution in #452
- @richawo made their first contribution in #431
- @krrishdholakia made their first contribution in #316
Full Changelog: v0.1.4...v0.1.5