UniSpeech-SAT
This model was released on 2021-10-12 and added to Hugging Face Transformers on 2021-10-26.
Overview
The UniSpeech-SAT model was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, and Xiangzhan Yu.
The abstract from the paper is the following:
Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisedly and incorporated during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.
This model was contributed by patrickvonplaten. The authors' code can be found here.
Usage tips
- UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use Wav2Vec2Processor for the feature extraction.
- The UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using Wav2Vec2CTCTokenizer.
- UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks; see the sketches after this list for an ASR example and a speaker-verification example.
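The snippets below are illustrative sketches of the two workflows mentioned above, not canonical recipes: the checkpoint names (`microsoft/unispeech-sat-base-100h-libri-ft` and `microsoft/unispeech-sat-base-plus-sv`) and the dummy LibriSpeech dataset are assumptions, so substitute the checkpoints and audio you actually use.

```python
import torch
from datasets import load_dataset
from transformers import UniSpeechSatForCTC, Wav2Vec2Processor

# Assumed CTC fine-tuned checkpoint; replace with the checkpoint you want to use.
checkpoint = "microsoft/unispeech-sat-base-100h-libri-ft"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = UniSpeechSatForCTC.from_pretrained(checkpoint)

# Small dummy LibriSpeech split, used here purely as example 16 kHz audio.
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```

For the speaker-oriented tasks, UniSpeechSatForXVector returns fixed-size speaker embeddings that can be compared with cosine similarity:

```python
import torch
from datasets import load_dataset
from transformers import UniSpeechSatForXVector, Wav2Vec2FeatureExtractor

# Assumed speaker-verification checkpoint; replace with your own.
checkpoint = "microsoft/unispeech-sat-base-plus-sv"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = UniSpeechSatForXVector.from_pretrained(checkpoint)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = [dataset[i]["audio"]["array"] for i in range(2)]
inputs = feature_extractor(audio, sampling_rate=16_000, padding=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings

# Compare the two utterances' speaker embeddings; a decision threshold
# (tuned per checkpoint) then decides same speaker vs. different speaker.
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(similarity.item())
```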
Resources
UniSpeechSatConfig
[[autodoc]] UniSpeechSatConfig
UniSpeechSat specific outputs
[[autodoc]] models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput
UniSpeechSatModel
[[autodoc]] UniSpeechSatModel
    - forward
UniSpeechSatForCTC
[[autodoc]] UniSpeechSatForCTC
    - forward
UniSpeechSatForSequenceClassification
[[autodoc]] UniSpeechSatForSequenceClassification
    - forward
UniSpeechSatForAudioFrameClassification
[[autodoc]] UniSpeechSatForAudioFrameClassification
    - forward
UniSpeechSatForXVector
[[autodoc]] UniSpeechSatForXVector
    - forward
UniSpeechSatForPreTraining
[[autodoc]] UniSpeechSatForPreTraining
    - forward