github ggml-org/llama.cpp b8766


mtmd: add Gemma 4 audio conformer encoder support (#21421)


Add audio processing for Gemma 4 E2B/E4B via a USM-style Conformer.

Architecture:

  • 12-layer Conformer: FFN → Self-Attention → Causal Conv1D → FFN → Norm
  • Subsampling Conv Projection: 2x Conv2D(stride=2) with LayerNorm
  • Full self-attention with sinusoidal RPE and a sliding-window mask (24 positions)
  • Logit softcapping at 50.0, ClippableLinear clamping
  • Output: 1024 → 1536 → RMSNorm → multimodal embedder
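The softcapping and ClippableLinear clamping listed above can be sketched with the usual tanh-based softcap formulation; a minimal illustration (function names are illustrative, not the actual llama.cpp symbols):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Softcap a logit at a fixed cap: cap * tanh(x / cap).
// Smoothly bounds values to (-cap, cap]; near zero it is almost identity.
static float softcap(float x, float cap = 50.0f) {
    return cap * std::tanh(x / cap);
}

// ClippableLinear-style hard clamp on a layer output, using
// per-tensor bounds (here passed explicitly for illustration).
static float clamp_output(float x, float lo, float hi) {
    return std::clamp(x, lo, hi);
}
```

For small inputs softcap is nearly the identity, so it only changes behavior where logits would otherwise blow up.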

Mel preprocessing (dedicated mtmd_audio_preprocessor_gemma4a):

  • HTK mel scale, 128 bins, magnitude STFT, mel_floor=1e-3
  • Standard periodic Hann window (320 samples), zero-padded to FFT size
  • Semicausal left-padding (frame_length/2 samples)
  • Frame count matched to PyTorch's unfold formula
  • No pre-emphasis, no Whisper-style normalization
  • Mel cosine similarity vs PyTorch: 0.9998
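The HTK mel scale referenced above maps frequency f (Hz) to mel via 2595 · log10(1 + f/700). A minimal sketch of the forward and inverse conversion (helper names are illustrative, not the preprocessor's actual symbols):

```cpp
#include <cassert>
#include <cmath>

// HTK mel scale: mel = 2595 * log10(1 + f / 700)
static double hz_to_mel_htk(double f_hz) {
    return 2595.0 * std::log10(1.0 + f_hz / 700.0);
}

// Inverse: f = 700 * (10^(mel / 2595) - 1)
static double mel_to_hz_htk(double mel) {
    return 700.0 * (std::pow(10.0, mel / 2595.0) - 1.0);
}
```

These two functions are what a 128-bin filterbank builder would use to place triangular filters equally spaced in mel, then convert edges back to Hz.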

Key fixes:

  • Tensor loading dedup: prevent get_tensor() from creating duplicate
    entries in ctx_data; fixed with a std::set guard.
  • ClippableLinear clamp_info loading moved after per-layer tensors.
  • Sliding window mask (24 positions) matching PyTorch context_size.
  • Skip Whisper normalization for Gemma4 mel output.
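The std::set dedup guard mentioned above can be illustrated as follows; this is a minimal sketch with hypothetical names (loader_ctx, ctx_data as a name list), not the actual mtmd loader code:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Hypothetical loader context: each tensor should land in ctx_data once.
struct loader_ctx {
    std::vector<std::string> ctx_data; // loaded tensor names, in order
    std::set<std::string>    loaded;   // guard against duplicate entries

    // Record a tensor the first time it is requested; repeated
    // get_tensor() calls for the same name do not append again.
    void get_tensor(const std::string & name) {
        if (loaded.insert(name).second) { // .second is false if already present
            ctx_data.push_back(name);
        }
    }
};
```

Without the guard, a second get_tensor() call for the same name would push a second entry into ctx_data, which is the bug class the fix addresses.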

Tested on E2B and E4B with CPU and Vulkan backends.
Transcribes: "Glad to see things are going well and business is starting
to pick up" (matching ground truth).

Ref: #21325
