# Granite Speech
This model was released on 2025-04-16 and added to Hugging Face Transformers on 2025-04-11.
## Overview

The Granite Speech model (blog post) is a multimodal language model consisting of a speech encoder, a speech projector, a large language model, and LoRA adapter(s). Details on each component of the current (Granite 3.2 Speech) model architecture are given below.
- **Speech encoder**: a Conformer encoder trained with Connectionist Temporal Classification (CTC) on character-level targets over ASR corpora. The encoder uses block attention and self-conditioned CTC from the middle layer.
- **Speech projector**: a query transformer (Q-Former) operating on the outputs of the last encoder block. The encoder and projector temporally downsample the audio features, which are then merged into the multimodal embeddings processed by the LLM.
- **Large language model**: Granite Speech leverages Granite LLMs, which were originally proposed in this paper.
- **LoRA adapter(s)**: Granite Speech contains a modality-specific LoRA adapter, which is enabled when audio features are provided and disabled otherwise.

Note that most of the aforementioned components are implemented generically, enabling compatibility and potential integration with other model architectures in Transformers.
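To build intuition for the temporal downsampling performed by the encoder and projector, here is a minimal sketch of the sequence-length arithmetic. The window size and query count below are illustrative assumptions, not the model's actual configuration:

```python
# Hypothetical sketch: a query transformer summarizes each window of encoder
# frames with a fixed number of learned queries, shrinking the sequence by
# roughly a factor of window_size / num_queries before it reaches the LLM.
import math

def downsampled_length(num_frames: int, window_size: int, num_queries: int) -> int:
    """Each window of `window_size` encoder frames is summarized by
    `num_queries` query vectors; the last partial window is still summarized."""
    num_windows = math.ceil(num_frames / window_size)
    return num_windows * num_queries

# e.g. 1,000 encoder frames, windows of 15 frames, 3 queries per window
print(downsampled_length(1000, 15, 3))  # 67 windows * 3 queries = 201
```

The downsampling matters because each remaining position becomes one embedding in the LLM's input sequence, so a shorter audio representation leaves more room in the context window.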
This model was contributed by Alexander Brooks, Avihu Dekel, and George Saon.
## Usage tips

- This model bundles its own LoRA adapter, which is automatically loaded and enabled/disabled as needed during inference. Be sure to install PEFT so that the LoRA is correctly applied!
- The model expects 16kHz sampling rate audio. The processor will automatically resample if needed.
- The LoRA adapter is automatically enabled when audio features are present and disabled for text-only inputs, so you don’t need to manage it manually.
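The processor resamples incoming audio to 16 kHz for you. As a rough illustration of what resampling entails, here is a naive linear-interpolation resampler; real pipelines use proper polyphase or sinc filters (e.g. via torchaudio), so treat this purely as a sketch:

```python
# Naive linear-interpolation resampling sketch (illustrative only; the
# processor and dedicated audio libraries handle this properly).
def resample_linear(samples: list[float], src_rate: int, dst_rate: int) -> list[float]:
    """Resample `samples` from src_rate to dst_rate by linear interpolation."""
    if src_rate == dst_rate:
        return list(samples)
    out_len = int(len(samples) / src_rate * dst_rate)
    out = []
    for i in range(out_len):
        pos = i * src_rate / dst_rate      # position in the source signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# 1 second of 44.1 kHz audio -> 16,000 samples at 16 kHz
audio = [0.0] * 44100
print(len(resample_linear(audio, 44100, 16000)))  # 16000
```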
## Usage example

Granite Speech is a multimodal speech-to-text model that can transcribe audio and respond to text prompts. Here's how to use it:
### Basic Speech Transcription

```python
import torch
from datasets import load_dataset, Audio
from transformers import GraniteSpeechForConditionalGeneration, GraniteSpeechProcessor

# Load model and processor
model = GraniteSpeechForConditionalGeneration.from_pretrained(
    "ibm-granite/granite-3.2-8b-speech",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = GraniteSpeechProcessor.from_pretrained("ibm-granite/granite-3.2-8b-speech")

# Load audio from a dataset (16 kHz sampling rate required)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=processor.feature_extractor.sampling_rate))
audio = ds["audio"][0]["array"]

# Process audio
inputs = processor(audio=audio, return_tensors="pt").to(model.device)

# Generate transcription
generated_ids = model.generate(**inputs, max_new_tokens=256)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(transcription)
```

### Speech-to-Text with Chat Template

For instruction-following with audio, use the chat template with audio directly in the conversation format:
```python
import torch
from datasets import load_dataset, Audio
from transformers import GraniteSpeechForConditionalGeneration, GraniteSpeechProcessor

model = GraniteSpeechForConditionalGeneration.from_pretrained(
    "ibm-granite/granite-3.2-8b-speech",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = GraniteSpeechProcessor.from_pretrained("ibm-granite/granite-3.2-8b-speech")

# Load audio from a dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=processor.feature_extractor.sampling_rate))
audio = ds["audio"][0]

# Prepare a conversation with audio and text
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": audio},
            {"type": "text", "text": "Transcribe the following audio:"},
        ],
    }
]

# Apply the chat template - the processor handles both tokenization and audio processing
inputs = processor.apply_chat_template(conversation, tokenize=True, return_tensors="pt").to(model.device)

# Generate transcription
generated_ids = model.generate(**inputs, max_new_tokens=512)
output_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(output_text)
```

### Batch Processing

Process multiple audio files efficiently:
```python
import torch
from datasets import load_dataset, Audio
from transformers import GraniteSpeechForConditionalGeneration, GraniteSpeechProcessor

model = GraniteSpeechForConditionalGeneration.from_pretrained(
    "ibm-granite/granite-3.2-8b-speech",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = GraniteSpeechProcessor.from_pretrained("ibm-granite/granite-3.2-8b-speech")

# Load multiple audio samples from a dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=processor.feature_extractor.sampling_rate))
audio_samples = [ds["audio"][i]["array"] for i in range(3)]

# Process the batch
inputs = processor(audio=audio_samples, return_tensors="pt", padding=True).to(model.device)

# Generate for all inputs
generated_ids = model.generate(**inputs, max_new_tokens=256)
transcriptions = processor.batch_decode(generated_ids, skip_special_tokens=True)

for i, transcription in enumerate(transcriptions):
    print(f"Audio {i+1}: {transcription}")
```

## GraniteSpeechConfig

[[autodoc]] GraniteSpeechConfig
## GraniteSpeechEncoderConfig

[[autodoc]] GraniteSpeechEncoderConfig

## GraniteSpeechProcessor

[[autodoc]] GraniteSpeechProcessor

## GraniteSpeechFeatureExtractor

[[autodoc]] GraniteSpeechFeatureExtractor

## GraniteSpeechForConditionalGeneration

[[autodoc]] GraniteSpeechForConditionalGeneration
    - forward