🎉 Introduction to new functions of GPTCache
- Support the `temperature` parameter

```python
from gptcache.adapter import openai

question = "what do you think about chatgpt"

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=1.0,  # change the temperature here
    messages=[{
        "role": "user",
        "content": question,
    }],
)
```
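As described in #306 below, the temperature controls the possibility of skipping the cache. As a conceptual sketch only (this is not GPTCache's actual implementation; the function name and the [0, 2] clamping are assumptions), such a policy could look like:

```python
import random

def should_skip_cache(temperature: float) -> bool:
    # Hypothetical policy sketch, not GPTCache's code: clamp the
    # temperature to [0, 2] and use it as a cache-skip probability.
    # 0.0 always reads the cache; 2.0 always calls the model.
    skip_probability = min(max(temperature, 0.0), 2.0) / 2.0
    return random.random() < skip_probability
```

Under this sketch, `temperature=0.0` always consults the cache, while `temperature=2.0` always bypasses it and calls the model.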
- Add the session layer

```python
from gptcache.adapter import openai
from gptcache.session import Session

session = Session(name="my-session")
question = "what do you think about chatgpt"

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": question}
    ],
    session=session,
)
```

Details: https://github.com/zilliztech/GPTCache/tree/main/examples#How-to-run-with-session
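One way to picture the session layer is that each session gets its own cache scope, so an answer cached in one session is not returned for another. A minimal illustrative sketch in plain Python (not GPTCache's internals; the class and method names are invented for illustration):

```python
# Illustrative sketch of session-scoped caching, not GPTCache's
# internals: answers are keyed by (session name, question), so
# different sessions never see each other's cached answers.
class SessionScopedCache:
    def __init__(self):
        self._store = {}

    def put(self, session_name: str, question: str, answer: str) -> None:
        self._store[(session_name, question)] = answer

    def get(self, session_name: str, question: str):
        return self._store.get((session_name, question))

cache = SessionScopedCache()
cache.put("my-session", "what do you think about chatgpt", "a cached answer")
print(cache.get("my-session", "what do you think about chatgpt"))     # hit
print(cache.get("another-session", "what do you think about chatgpt"))  # miss: None
```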
- Support configuring the cache with a YAML file for the server

```python
from gptcache.adapter.api import init_similar_cache_from_config

init_similar_cache_from_config(config_dir="cache_config_template.yml")
```

Config file template: https://github.com/zilliztech/GPTCache/blob/main/cache_config_template.yml
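For illustration only, a cache config file could look like the fragment below; the keys shown are assumptions, not copied from the template, so treat the linked `cache_config_template.yml` as the authoritative schema.

```yaml
# Hypothetical example; field names are assumptions.
# See cache_config_template.yml for the real schema.
embedding: onnx
storage_config:
  data_dir: gptcache_data
evaluation: distance
```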
- Adapt the Dolly model

```python
from gptcache.adapter.dolly import Dolly

llm = Dolly.from_model(model="databricks/dolly-v2-3b")
question = "what do you think about chatgpt"
llm(question)
```
What's Changed
- Use temperature to control possibility of skip_cache by @jaelgu in #306
- Add template for similar cache init config by @Chiiizzzy in #308
- Add dolly by @junjiejiangjjj in #311
- Add session usage doc by @shiyu22 in #310
- Add docs for temperature by @jaelgu in #312
- Dolly and llama docs by @junjiejiangjjj in #314
- Some minor polish on cache init by @Chiiizzzy in #313
- Update the version to 0.1.21 by @SimFG in #318
Full Changelog: 0.1.20...0.1.21