# DeBERTa-v2
This model was released on 2020-06-05 and added to Hugging Face Transformers on 2021-02-19.
DeBERTa-v2 improves on the original DeBERTa architecture by using a SentencePiece-based tokenizer and a new vocabulary size of 128K. It also adds an additional convolutional layer within the first transformer layer to better learn local dependencies of input tokens. Finally, the position projection and content projection matrices are shared in the attention layer to reduce the number of parameters.
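These changes are visible in the checkpoint configuration. Below is a minimal sketch of inspecting them; the exact field names (`conv_kernel_size`, `conv_act`, `share_att_key`) are taken from the released config files and may differ between checkpoints, hence the `getattr` guards.

```py
from transformers import AutoConfig

# Inspect a released DeBERTa-v2 checkpoint configuration.
config = AutoConfig.from_pretrained("microsoft/deberta-v2-xlarge")

print(config.vocab_size)                          # the new ~128K SentencePiece vocabulary
print(getattr(config, "conv_kernel_size", None))  # kernel size of the extra convolution layer
print(getattr(config, "conv_act", None))          # activation used by that convolution layer
print(getattr(config, "share_att_key", None))     # whether position/content projections are shared
```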
You can find all the original DeBERTa-v2 checkpoints under the [Microsoft](https://huggingface.co/microsoft) organization.
Click on the DeBERTa-v2 models in the right sidebar for more examples of how to apply DeBERTa-v2 to different language tasks.
The examples below demonstrate how to classify text with `Pipeline`, the `AutoModel` class, or the `transformers` CLI.
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text-classification",
    model="microsoft/deberta-v2-xlarge-mnli",
    device=0,
    dtype=torch.float16
)
result = pipeline("DeBERTa-v2 is great at understanding context!")
print(result)
```

```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/deberta-v2-xlarge-mnli"
)
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v2-xlarge-mnli",
    dtype=torch.float16,
    device_map="auto"
)

inputs = tokenizer("DeBERTa-v2 is great at understanding context!", return_tensors="pt").to(model.device)
outputs = model(**inputs)

logits = outputs.logits
predicted_class_id = logits.argmax().item()
predicted_label = model.config.id2label[predicted_class_id]
print(f"Predicted label: {predicted_label}")
```

```bash
echo -e "DeBERTa-v2 is great at understanding context!" | transformers run --task fill-mask --model microsoft/deberta-v2-xlarge-mnli --device 0
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses bitsandbytes quantization to quantize only the weights to 4-bit.
```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/deberta-v2-xlarge-mnli"
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    dtype="float16"
)

inputs = tokenizer("DeBERTa-v2 is great at understanding context!", return_tensors="pt").to(model.device)
outputs = model(**inputs)
logits = outputs.logits
predicted_class_id = logits.argmax().item()
predicted_label = model.config.id2label[predicted_class_id]
print(f"Predicted label: {predicted_label}")
```

## DebertaV2Config
[[autodoc]] DebertaV2Config
## DebertaV2Tokenizer

[[autodoc]] DebertaV2Tokenizer
    - get_special_tokens_mask
    - save_vocabulary
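A minimal sketch of the SentencePiece-based tokenizer in action (the slow tokenizer requires the `sentencepiece` package to be installed):

```py
from transformers import DebertaV2Tokenizer

tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")

# SentencePiece pieces, with "▁" marking word boundaries.
print(tokenizer.tokenize("DeBERTa-v2 is great at understanding context!"))

# Calling the tokenizer adds the special [CLS] ... [SEP] tokens around the sequence.
encoding = tokenizer("DeBERTa-v2 is great at understanding context!")
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
```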
## DebertaV2TokenizerFast

[[autodoc]] DebertaV2TokenizerFast
## DebertaV2Model

[[autodoc]] DebertaV2Model
    - forward
## DebertaV2PreTrainedModel

[[autodoc]] DebertaV2PreTrainedModel
    - forward
## DebertaV2ForMaskedLM

[[autodoc]] DebertaV2ForMaskedLM
    - forward
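A minimal fill-mask sketch with the base checkpoint. Note that if the checkpoint does not ship full language-modeling head weights, a newly initialized head is used and the prediction will not be meaningful until fine-tuning; the sketch mainly shows the API shape.

```py
import torch
from transformers import AutoTokenizer, DebertaV2ForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2ForMaskedLM.from_pretrained("microsoft/deberta-v2-xlarge")

inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the position of the mask token and take its highest-scoring prediction.
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```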
## DebertaV2ForSequenceClassification

[[autodoc]] DebertaV2ForSequenceClassification
    - forward
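A minimal sketch of natural language inference with the MNLI-fine-tuned checkpoint, where the premise and hypothesis are passed as a sentence pair:

```py
import torch
from transformers import AutoTokenizer, DebertaV2ForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge-mnli")
model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge-mnli")

# NLI models take a (premise, hypothesis) pair as a single input.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax().item()])
```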
## DebertaV2ForTokenClassification

[[autodoc]] DebertaV2ForTokenClassification
    - forward
## DebertaV2ForQuestionAnswering

[[autodoc]] DebertaV2ForQuestionAnswering
    - forward
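A minimal extractive question answering sketch. The QA head on the base checkpoint is newly initialized, so the decoded span is not meaningful until the model is fine-tuned on a QA dataset; the sketch shows how the start/end logits are used.

```py
import torch
from transformers import AutoTokenizer, DebertaV2ForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2ForQuestionAnswering.from_pretrained("microsoft/deberta-v2-xlarge")

question = "Where do I live?"
context = "My name is Sarah and I live in London."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span between them.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```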
## DebertaV2ForMultipleChoice

[[autodoc]] DebertaV2ForMultipleChoice
    - forward
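A minimal sketch of how multiple-choice inputs are shaped: each (prompt, choice) pair is encoded separately and the tensors are stacked into `[batch_size, num_choices, seq_len]`. The multiple-choice head on the base checkpoint is newly initialized, so the scores are only meaningful after fine-tuning.

```py
import torch
from transformers import AutoTokenizer, DebertaV2ForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2ForMultipleChoice.from_pretrained("microsoft/deberta-v2-xlarge")

prompt = "The capital of France is"
choices = ["Paris.", "Berlin."]

# Encode each (prompt, choice) pair, then add a batch dimension so the
# inputs have shape [1, num_choices, seq_len].
encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape [1, num_choices]
print(logits.argmax(dim=-1).item())
```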