github ggml-org/llama.cpp b8106


model : add JAIS-2 architecture support (#19488)

  • model: add JAIS-2 architecture support

Add support for the JAIS-2 family of Arabic-English bilingual models
from Inception AI (https://huggingface.co/inceptionai/Jais-2-8B-Chat).

Architecture characteristics:

  • LayerNorm (not RMSNorm) with biases
  • ReLU² (ReLU squared) activation function
  • Separate Q/K/V projections with biases
  • Simple MLP without gate projection (up -> act -> down)
  • RoPE positional embeddings
  • GPT-2 BPE tokenizer
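The normalization and feed-forward characteristics above can be sketched in NumPy. This is a hedged illustration of the layer structure described, not llama.cpp code; the weight names and shapes here are hypothetical:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Classic LayerNorm with learned scale *and* bias
    # (JAIS-2 uses LayerNorm with biases rather than RMSNorm).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps) * gamma + beta

def relu_squared(x):
    # ReLU² activation: ReLU followed by squaring.
    r = np.maximum(x, 0.0)
    return r * r

def mlp(x, w_up, b_up, w_down, b_down):
    # Simple MLP without a gate projection: up -> act -> down.
    return relu_squared(x @ w_up + b_up) @ w_down + b_down
```

Contrast this with LLaMA-style blocks, which use RMSNorm (no mean subtraction, no bias) and a gated SiLU MLP with three projections.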

Supported model sizes:

  • Jais-2-8B (32 layers, 26 heads, 3328 hidden)
  • Jais-2-70B (68 layers, 56 heads, 7168 hidden)

Tested with quantizations: BF16, Q8_0, Q6_K, Q5_K_M, Q5_0, Q4_K_M, Q4_0, Q3_K_M, Q2_K

Note: JAIS-2 requires F32 precision accumulators for numerical stability
and uses standard attention (not flash attention) on CUDA backends.
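A toy NumPy illustration (not llama.cpp code) of why accumulator precision matters: summing many small values in an f16 accumulator stalls once the running sum outgrows f16's resolution, while an f32 accumulator over the same f16 inputs stays close to the true total:

```python
import numpy as np

# Sum 20000 copies of 0.001 (f16). The true total is 20.0.
vals = np.full(20000, 0.001, dtype=np.float16)

acc16 = np.float16(0.0)
for v in vals:
    acc16 = np.float16(acc16 + v)   # f16 running sum: rounds at every step

acc32 = np.float32(0.0)
for v in vals:
    acc32 += np.float32(v)          # f32 accumulator over the same f16 data

# acc32 lands near 20, while acc16 stalls far short: once the running
# sum reaches a few units, adding 0.001 is below half an f16 ULP and
# rounds away entirely.
```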

  • fix: run convert_hf_to_gguf_update.py for jais-2 tokenizer hash

  • fix: use NEOX RoPE type for JAIS2

  • fix: remove Q/K permutation (NEOX RoPE doesn't need it)
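The NEOX pairing mentioned in these fixes can be sketched in NumPy (an illustration of the rotation layout, not the llama.cpp kernel): NEOX-style RoPE rotates dimension i against dimension i + d/2, whereas the "normal" variant pairs adjacent dimensions 2i and 2i+1, which is what a Q/K permutation converts between:

```python
import numpy as np

def rope_neox(x, pos, base=10000.0):
    # NEOX-style RoPE: dimension i is rotated against dimension i + d/2.
    # (The "normal" variant pairs adjacent dims 2i, 2i+1 instead.)
    d = x.shape[-1]
    half = d // 2
    inv_freq = base ** (-np.arange(half) / half)   # per-pair frequency
    theta = pos * inv_freq
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

Since each pair is a pure 2-D rotation, applying it at position 0 is the identity and the vector norm is preserved at any position.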

  • fix: enable flash attention for JAIS2 (fixed by #19115)

  • fix: add dedicated JAIS2 pre-tokenizer type and control vector support

  • Add LLAMA_VOCAB_PRE_TYPE_JAIS2 with cascading whitespace regex
  • Include original regex from tokenizer.json as comment
  • Add build_cvec call for control vector support
  • Overriding set_vocab is no longer necessary
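The actual JAIS-2 pattern lives in tokenizer.json and is Unicode-aware; the simplified, ASCII-only GPT-2-style stand-in below only illustrates how cascading whitespace alternatives in such a pre-tokenizer regex behave:

```python
import re

# Simplified, ASCII-only stand-in for a GPT-2-style pre-tokenizer regex.
# The whitespace alternatives "cascade": \s+(?!\S) grabs a whitespace run
# but leaves the final space attached to the following word, and a bare
# \s+ catches whatever remains (e.g. trailing whitespace).
pat = re.compile(
    r"'s|'t|'re|'ve|'m|'ll|'d"
    r"| ?[A-Za-z]+| ?[0-9]+| ?[^\sA-Za-z0-9]+"
    r"|\s+(?!\S)|\s+"
)

print(pat.findall("Hello   world, it's 2024!"))
```

Note how "world", "it", and "2024" each keep one leading space, while the contraction suffix "'s" and punctuation split off on their own.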

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
