BART
This model was released on 2019-10-29 and added to Hugging Face Transformers on 2020-11-16.
BART is a sequence-to-sequence model that combines the pretraining objectives of BERT and GPT. It is pretrained by corrupting text in different ways, such as deleting words, shuffling sentences, or masking tokens, and then learning to reconstruct the original. The encoder encodes the corrupted document and the decoder reconstructs it. Because it learns to recover the original text, BART becomes strong at both understanding and generating language.
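The snippet below is a minimal sketch of this denoising behavior with the facebook/bart-large checkpoint: the encoder reads a sentence containing a masked span and generate() decodes a repaired version. The example sentence and generation settings are illustrative.

```py
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# A "corrupted" input: part of the sentence is replaced with the <mask> token.
corrupted = "UN Chief Says There Is No <mask> in Syria"
inputs = tokenizer(corrupted, return_tensors="pt")

# The decoder generates a full sequence, filling in the masked span.
generated_ids = model.generate(**inputs, max_new_tokens=25)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```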
You can find all the original BART checkpoints under the AI at Meta organization.
The example below demonstrates how to predict the `<mask>` token with Pipeline, AutoModel, and from the command line.
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="facebook/bart-large",
    dtype=torch.float16,
    device=0
)
pipeline("Plants create <mask> through a process known as photosynthesis.")
```

```py
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "facebook/bart-large",
)
model = AutoModelForMaskedLM.from_pretrained(
    "facebook/bart-large",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)
inputs = tokenizer("Plants create <mask> through a process known as photosynthesis.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs['input_ids'] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print(f"The predicted token is: {predicted_token}")echo -e "Plants create <mask> through a process known as photosynthesis." | transformers run --task fill-mask --model facebook/bart-large --device 0- Inputs should be padded on the right because BERT uses absolute position embeddings.
- The facebook/bart-large-cnn checkpoint doesn't include `mask_token_id`, which means it can't perform mask-filling tasks.
- BART doesn't use `token_type_ids` for sequence classification. Use `BartTokenizer` or `encode` to get the proper splitting.
- The forward pass of `BartModel` creates the `decoder_input_ids` if they're not passed. This can be different from other model APIs, but it is a useful feature for mask-filling tasks.
- Model predictions are intended to be identical to the original implementation when `forced_bos_token_id=0`. This only works if the text passed to `fairseq.encode` begins with a space.
- `generate` should be used for conditional generation tasks like summarization (see the sketch after this list).
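As a minimal sketch of the last point, the facebook/bart-large-cnn summarization checkpoint can be paired with `generate()`. The article text and generation settings below are illustrative, not part of the original example.

```py
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained(
    "facebook/bart-large-cnn",
    dtype=torch.float16,
    device_map="auto",
)

article = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high "
    "winds amid dry conditions. The aim is to reduce the risk of wildfires."
)
inputs = tokenizer(article, return_tensors="pt").to(model.device)

# generate() runs the encoder once and decodes the summary autoregressively.
# num_beams and max_length are illustrative values.
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```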
BartConfig
[[autodoc]] BartConfig - all
BartTokenizer
[[autodoc]] BartTokenizer - all
BartTokenizerFast
[[autodoc]] BartTokenizerFast - all
BartModel
[[autodoc]] BartModel - forward
BartForConditionalGeneration
[[autodoc]] BartForConditionalGeneration - forward
BartForSequenceClassification
[[autodoc]] BartForSequenceClassification - forward
BartForQuestionAnswering
[[autodoc]] BartForQuestionAnswering - forward
BartForCausalLM
[[autodoc]] BartForCausalLM - forward