github huggingface/optimum-intel v1.14.0
v1.14.0: IPEX models


IPEX models

```python
from optimum.intel import IPEXModelForCausalLM
from transformers import AutoTokenizer, pipeline

# Load an IPEX-optimized causal language model and its tokenizer
model_id = "Intel/q8_starcoder"
model = IPEXModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# IPEX models plug directly into the standard transformers pipeline API
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
results = pipe("He's a dreadful magician and")
```
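To run the snippet above, optimum-intel must be installed with IPEX support. A typical install sketch, assuming the `ipex` extra documented in the optimum-intel README (the exact extra name may vary by version):

```shell
pip install --upgrade --upgrade-strategy eager "optimum[ipex]"
```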

Fixes

  • Fix position_ids initialization for first inference of stateful models by @eaidova in #532
  • Relax requirement to have a registered normalized config for decoder models by @eaidova in #537
