# WavLM
This model was released on 2021-10-26 and added to Hugging Face Transformers on 2021-12-16.
## Overview

The WavLM model was proposed in WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
The abstract from the paper is the following:
Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisedly and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.
Relevant checkpoints can be found under https://huggingface.co/models?other=wavlm.
This model was contributed by patrickvonplaten. The authors' code can be found here.
## Usage tips

- WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [`Wav2Vec2Processor`] for the feature extraction.
- The WavLM model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]; see the sketch after this list.
- WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks.
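Taken together, the first two tips give a short recognition pipeline. The sketch below is a minimal example rather than a canonical recipe: it assumes a WavLM checkpoint already fine-tuned with CTC (the checkpoint name used here is an assumption; substitute your own). The processor handles feature extraction, and `processor.batch_decode`, backed by the CTC tokenizer, performs the decoding.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, WavLMForCTC

# Assumed checkpoint: a WavLM model fine-tuned with CTC on LibriSpeech.
# Replace with any CTC fine-tuned WavLM checkpoint of your own.
checkpoint = "patrickvonplaten/wavlm-libri-clean-100h-base-plus"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = WavLMForCTC.from_pretrained(checkpoint)

# Load a 16 kHz test utterance.
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
waveform = dataset[0]["audio"]["array"]

# Feature extraction: raw float waveform -> normalized, padded model input.
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: greedy argmax per frame, then the tokenizer collapses
# repeated tokens and strips blank tokens.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```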
## Resources

## WavLMConfig

[[autodoc]] WavLMConfig
## WavLMModel

[[autodoc]] WavLMModel
    - forward
## WavLMForCTC

[[autodoc]] WavLMForCTC
    - forward
## WavLMForSequenceClassification

[[autodoc]] WavLMForSequenceClassification
    - forward
## WavLMForAudioFrameClassification

[[autodoc]] WavLMForAudioFrameClassification
    - forward
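Frame classification returns one prediction per output frame, which is how the speaker-diarization use case from the usage tips is realized. A minimal sketch, assuming microsoft/wavlm-base-plus-sd is a diarization fine-tuned checkpoint (verify the checkpoint on the Hub before relying on it):

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor, WavLMForAudioFrameClassification

# Assumed checkpoint: a WavLM model fine-tuned for speaker diarization.
checkpoint = "microsoft/wavlm-base-plus-sd"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = WavLMForAudioFrameClassification.from_pretrained(checkpoint)

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# One logit vector per ~20 ms output frame; sigmoid + threshold marks
# which speaker classes are active in each frame.
probabilities = torch.sigmoid(logits[0])
speaker_activity = (probabilities > 0.5).long()
```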
## WavLMForXVector

[[autodoc]] WavLMForXVector
    - forward
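For the speaker-verification task highlighted in the usage tips, the x-vector head pools the encoder output into a single utterance-level speaker embedding. A minimal sketch, assuming microsoft/wavlm-base-plus-sv is a verification fine-tuned checkpoint (an assumption; verify on the Hub): two utterances are treated as the same speaker when their embeddings are close in cosine distance.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor, WavLMForXVector

# Assumed checkpoint: a WavLM model fine-tuned for speaker verification.
checkpoint = "microsoft/wavlm-base-plus-sv"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = WavLMForXVector.from_pretrained(checkpoint)

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
waveforms = [dataset[i]["audio"]["array"] for i in range(2)]

inputs = feature_extractor(waveforms, sampling_rate=16_000, padding=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings

# Compare the two utterance-level speaker embeddings with cosine similarity.
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
threshold = 0.86  # illustrative decision threshold; tune on held-out data
same_speaker = bool(similarity > threshold)
```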