🎉 Introduction to new functions of GPTCache
- Support sessions for `LangChainLLMs`

```python
from langchain import OpenAI
from gptcache.adapter.langchain_models import LangChainLLMs
from gptcache.session import Session

session = Session(name="sqlite-example")
llm = LangChainLLMs(llm=OpenAI(temperature=0), session=session)
```
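A session ties cache lookups to a named scope, so an answer cached in one session is not served in another. A minimal pure-Python sketch of that idea (the `SessionCache` class is hypothetical; GPTCache's real lookup uses semantic similarity, not exact keys):

```python
# Hypothetical sketch of session-scoped caching, for illustration only.
# GPTCache's actual implementation performs vector similarity search.
class SessionCache:
    def __init__(self):
        self._store = {}  # (session_name, question) -> answer

    def get(self, session, question):
        return self._store.get((session, question))

    def put(self, session, question, answer):
        self._store[(session, question)] = answer

cache = SessionCache()
cache.put("sqlite-example", "What is GPTCache?", "A semantic cache for LLMs.")

# A hit only occurs inside the same session:
hit = cache.get("sqlite-example", "What is GPTCache?")
miss = cache.get("other-session", "What is GPTCache?")
```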
- Optimize the summarization context process

```python
from gptcache import cache
from gptcache.processor.context.summarization_context import SummarizationContextProcess

context_process = SummarizationContextProcess()
cache.init(
    pre_embedding_func=context_process.pre_process,
)
```
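The context processor condenses a long conversation before it is embedded, so similar long dialogues map to nearby cache entries. A rough sketch of that pre-processing step (the trivial `summarize` helper below is a hypothetical stand-in for the self-hosted summarization model this release switches to):

```python
def summarize(text: str, max_words: int = 20) -> str:
    # Hypothetical stand-in for a real summarization model:
    # simply keep the first max_words words.
    words = text.split()
    return " ".join(words[:max_words])

def pre_process(messages: list) -> str:
    # Flatten the chat history, then condense it before embedding,
    # mirroring the role of SummarizationContextProcess.pre_process.
    full_context = " ".join(m["content"] for m in messages)
    return summarize(full_context)

history = [
    {"role": "user", "content": "Tell me about caching strategies for LLM apps."},
    {"role": "assistant", "content": "You can cache exact prompts or use semantic similarity."},
]
condensed = pre_process(history)
```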
- Add the BabyAGI bootcamp
  Details: https://github.com/zilliztech/GPTCache/blob/main/docs/bootcamp/langchain/baby_agi.ipynb
What's Changed
- Update langchain llms with session by @shiyu22 in #327
- Wrap gptcache server in a docker image by @Chiiizzzy in #329
- Fix requirements conflict for sphinx by @jaelgu in #330
- Use self-hosted tokenizer and update summarization context. by @wxywb in #331
- Optimize some code by @SimFG in #333
- Add BabyAGI bootcamp by @shiyu22 in #334
- Improve the API for `import_ruamel` by @SimFG in #336
Full Changelog: 0.1.22...0.1.23