🎉 Introduction to new functions of GPTCache
- Add StableDiffusion adapter (experimental)
```python
import torch

from gptcache import cache
from gptcache.adapter.diffusers import StableDiffusionPipeline
from gptcache.processor.pre import get_prompt

# Use the raw prompt as the cache key
cache.init(
    pre_embedding_func=get_prompt,
)

model_id = "stabilityai/stable-diffusion-2-1"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "a photo of an astronaut riding a horse on mars"
pipe(prompt=prompt).images[0]
```
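The `pre_embedding_func` passed to `cache.init` decides what serves as the cache key. A minimal sketch of what a prompt-extracting pre-processor such as `get_prompt` does (`get_prompt_sketch` is a simplified, hypothetical stand-in; the real function lives in `gptcache.processor.pre`):

```python
# Simplified sketch: a pre-embedding function receives the call's
# keyword arguments and returns the value to use as the cache key.
def get_prompt_sketch(data, **_):
    return data.get("prompt")

key = get_prompt_sketch({"prompt": "a photo of an astronaut riding a horse on mars"})
print(key)  # a photo of an astronaut riding a horse on mars
```

With this in place, a second call with an identical prompt can be answered from the cache instead of rerunning the pipeline.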
- Add a speech-to-text bootcamp (link)
- More convenient management of cache files

```python
from gptcache.manager.factory import manager_factory

# "sqlite,faiss" selects SQLite for scalar storage and Faiss for the
# vector index; "dimension" must match the size of your embeddings
data_manager = manager_factory('sqlite,faiss', data_dir="test_cache", vector_params={"dimension": 5})
```
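The first argument is a comma-separated spec: the part before the comma names the scalar store and the part after names the vector store. A rough, hypothetical sketch of how such a spec string is read (`parse_manager_spec` is illustrative only; the real `manager_factory` also constructs the storage backends):

```python
# Hypothetical helper illustrating how a "scalar,vector" spec splits;
# the real factory goes on to instantiate the named backends.
def parse_manager_spec(spec):
    scalar, vector = spec.split(",")
    return {"scalar_store": scalar.strip(), "vector_store": vector.strip()}

print(parse_manager_spec("sqlite,faiss"))
# {'scalar_store': 'sqlite', 'vector_store': 'faiss'}
```

The returned `data_manager` can then be passed to `cache.init(data_manager=data_manager)` so all cache files land under the chosen `data_dir`.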
- Add a simple GPTCache server (experimental)
After starting this server, you can:
- put data into the cache, for example:

```shell
curl -X PUT -d "receive a hello message" "http://localhost:8000?prompt=hello"
```

- get data from the cache, for example:

```shell
curl -X GET "http://localhost:8000?prompt=hello"
```

Currently the service is only a simple map cache; more features are under development.
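Because the server is currently a plain map cache, its semantics can be pictured as a dictionary keyed by prompt: PUT stores an answer, GET returns it verbatim. An illustrative stdlib sketch of that behavior (not the server's actual implementation):

```python
# Toy model of the server's current map-cache semantics:
# PUT stores an answer under a prompt, GET returns it verbatim.
store = {}

def put(prompt, answer):
    store[prompt] = answer

def get(prompt):
    return store.get(prompt)  # None on a cache miss

put("hello", "receive a hello message")
print(get("hello"))  # receive a hello message
```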
What's Changed
- Adapt StableDiffusion by @jaelgu in #234
- Add audio embedding with data2vec by @jaelgu in #238
- Support multi-model question by @junjiejiangjjj in #235
- Add speech to text bootcamp by @shiyu22 in #239
- Fix auto API references script for gptcache.adapter by @jaelgu in #240
- Update README with multimodal adapter in modules by @jaelgu in #242
- Add manager_factory to create data_manager by @junjiejiangjjj in #241
- Add a simple GPTCache server by @SimFG in #244
Full Changelog: 0.1.15...0.1.16