🎉 Hello world, OpenLLM
OpenLLM version 0.1.1 brings initial support for SOTA LLMs (more to come!!):
| Model | CPU | GPU | Installation | Model IDs |
| --- | --- | --- | --- | --- |
| flan-t5 | ✅ | ✅ | `pip install "openllm[flan-t5]"` | google/flan-t5-small<br>google/flan-t5-base<br>google/flan-t5-large<br>google/flan-t5-xl<br>google/flan-t5-xxl |
| dolly-v2 | ✅ | ✅ | `pip install openllm` | databricks/dolly-v2-3b<br>databricks/dolly-v2-7b<br>databricks/dolly-v2-12b |
| chatglm | ❌ | ✅ | `pip install "openllm[chatglm]"` | thudm/chatglm-6b<br>thudm/chatglm-6b-int8<br>thudm/chatglm-6b-int4 |
| starcoder | ❌ | ✅ | `pip install "openllm[starcoder]"` | bigcode/starcoder<br>bigcode/starcoderbase |
| falcon | ❌ | ✅ | `pip install "openllm[falcon]"` | tiiuae/falcon-7b<br>tiiuae/falcon-40b<br>tiiuae/falcon-7b-instruct<br>tiiuae/falcon-40b-instruct |
| stablelm | ✅ | ✅ | `pip install openllm` | stabilityai/stablelm-tuned-alpha-3b<br>stabilityai/stablelm-tuned-alpha-7b<br>stabilityai/stablelm-base-alpha-3b<br>stabilityai/stablelm-base-alpha-7b |
Quickly start falcon locally with `openllm start`:

```bash
openllm start falcon
```

Easily bundle this LLM into a Bento, a portable format that can be deployed anywhere:

```bash
openllm build falcon
```

Refer to the README.md for more details.
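Once a server is running, it can also be queried over HTTP. Below is a minimal sketch in Python using only the standard library; the default port (3000) and the `/v1/generate` endpoint with a JSON `prompt` field are assumptions about this version's API, so check the README for the exact interface:

```python
import json
from urllib import request

def build_generate_request(prompt: str, host: str = "http://localhost:3000"):
    """Build an HTTP POST request for the server's generate endpoint.

    NOTE: the endpoint path and payload shape here are assumptions;
    consult the OpenLLM README for the exact API of your version.
    """
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return request.Request(
        f"{host}/v1/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("What is the meaning of life?")
# With `openllm start falcon` running locally, send it with:
# print(request.urlopen(req).read().decode())
```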
## Installation

```bash
pip install openllm==0.1.1
```

To upgrade from a previous version, use the following command:

```bash
pip install --upgrade openllm==0.1.1
```

## Usage
List all available models: `python -m openllm.models`

Start an LLM: `python -m openllm start dolly-v2`
Full Changelog: v0.1.0...v0.1.1