
RoBERTa

This model was released on 2019-07-26 and added to Hugging Face Transformers on 2020-11-16.


RoBERTa improves on BERT by revisiting its pretraining recipe, showing that BERT was significantly undertrained and that training design choices matter. The changes include dynamic masking, packing full sentences without the next sentence prediction objective, much larger batches, and a byte-level BPE tokenizer.
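Dynamic masking re-samples which tokens are hidden each time a sequence is batched, rather than fixing the mask once at preprocessing. As a minimal sketch of the idea (on-the-fly masking with DataCollatorForLanguageModeling, not the exact RoBERTa training setup):

```py
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,  # masking rate used in BERT-style pretraining
)

encoding = tokenizer("Plants create energy through a process known as photosynthesis.")
# Each call re-samples the masked positions, so repeated epochs see different masks.
print(collator([encoding["input_ids"]])["input_ids"])
print(collator([encoding["input_ids"]])["input_ids"])
```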

You can find all the original RoBERTa checkpoints under the Facebook AI organization.

The example below demonstrates how to predict the <mask> token with Pipeline, AutoModel, and from the command line.

import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="FacebookAI/roberta-base",
    dtype=torch.float16,
    device=0
)
pipeline("Plants create <mask> through a process known as photosynthesis.")
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "FacebookAI/roberta-base",
)
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/roberta-base",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)
inputs = tokenizer("Plants create <mask> through a process known as photosynthesis.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

# Find the position of <mask> and pick the highest-scoring token there.
masked_index = torch.where(inputs['input_ids'] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
echo -e "Plants create <mask> through a process known as photosynthesis." | transformers run --task fill-mask --model FacebookAI/roberta-base --device 0
  • RoBERTa doesn’t have token_type_ids, so you don’t need to indicate which token belongs to which segment. Separate your segments with the separation token tokenizer.sep_token or </s>; the tokenizer also inserts the separators automatically when you pass two segments, as shown in the sketch below.
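For example, encoding a sentence pair shows the tokenizer placing </s></s> between the two segments (a small illustrative check):

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")

# Passing two segments lets the tokenizer insert the separators for you.
encoding = tokenizer("How old are you?", "I'm 6 years old.")
print(tokenizer.decode(encoding["input_ids"]))
# roughly: <s>How old are you?</s></s>I'm 6 years old.</s>
```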

[[autodoc]] RobertaConfig

[[autodoc]] RobertaTokenizer
    - get_special_tokens_mask
    - save_vocabulary

[[autodoc]] RobertaTokenizerFast

[[autodoc]] RobertaModel
    - forward

[[autodoc]] RobertaForCausalLM
    - forward

[[autodoc]] RobertaForMaskedLM
    - forward

[[autodoc]] RobertaForSequenceClassification
    - forward

[[autodoc]] RobertaForMultipleChoice
    - forward

[[autodoc]] RobertaForTokenClassification
    - forward

[[autodoc]] RobertaForQuestionAnswering
    - forward