XLM
This model was released on 2019-01-22 and added to Hugging Face Transformers on 2020-11-16.
XLM demonstrates cross-lingual pretraining with two approaches: an unsupervised approach that relies only on monolingual data, and a supervised approach that leverages parallel data with a cross-lingual language model objective. The XLM model supports causal language modeling (CLM), masked language modeling (MLM), and translation language modeling (TLM, an extension of BERT's masked language modeling objective to inputs in multiple languages).
You can find all the original XLM checkpoints under the Facebook AI community organization.
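Several of the multilingual XLM checkpoints also rely on language embeddings that tell the model which language each token belongs to; they are passed through the model's `langs` argument, and the language-to-id mapping is exposed by the tokenizer's `lang2id` attribute. The snippet below is a minimal sketch of this, assuming the English-French CLM checkpoint `FacebookAI/xlm-clm-enfr-1024` purely for illustration.

```py
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

# English-French checkpoint trained with the causal language modeling (CLM) objective
# (assumed here only to illustrate language embeddings)
tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024")

# lang2id maps language codes to the ids used by the language embeddings, e.g. {"en": 0, "fr": 1}
print(tokenizer.lang2id)

input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])

# one language id per token, with the same shape as input_ids
langs = torch.full_like(input_ids, tokenizer.lang2id["en"])

with torch.no_grad():
    outputs = model(input_ids, langs=langs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```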
The examples below demonstrate how to predict the `<mask>` token with `Pipeline`, `AutoModel`, and from the command line.
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="facebook/xlm-roberta-xl",
    dtype=torch.float16,
    device=0
)
pipeline("Bonjour, je suis un modèle <mask>.")
```

```py
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "FacebookAI/xlm-mlm-en-2048",
)
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-mlm-en-2048",
    dtype=torch.float16,
    device_map="auto",
)
# build the prompt from the tokenizer's mask token (XLMTokenizer defaults to "<special1>" rather than "<mask>")
inputs = tokenizer(f"Hello, I'm a {tokenizer.mask_token} model.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits.argmax(dim=-1)

predicted_token = tokenizer.decode(predictions[0][inputs["input_ids"][0] == tokenizer.mask_token_id])
print(f"Predicted token: {predicted_token}")
```

```bash
echo -e "Plants create <mask> through a process known as photosynthesis." | transformers run --task fill-mask --model FacebookAI/xlm-mlm-en-2048 --device 0
```
XLMConfig
[[autodoc]] XLMConfig
XLMTokenizer
[[autodoc]] XLMTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary
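A minimal sketch of what these helpers return for a sequence pair, assuming the `FacebookAI/xlm-mlm-en-2048` vocabulary (XLM packs a pair as `<s> A </s> B </s>`):

```py
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-en-2048")

ids_a = tokenizer.encode("How are you?", add_special_tokens=False)
ids_b = tokenizer.encode("Fine, thanks.", add_special_tokens=False)

# a pair of sequences is packed as: <s> A </s> B </s>
pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.convert_ids_to_tokens(pair_ids))

# segment ids distinguishing the two sequences, aligned with pair_ids
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))

# marks which positions in pair_ids are special tokens (1) vs. sequence tokens (0)
print(tokenizer.get_special_tokens_mask(ids_a, ids_b, already_has_special_tokens=False))
```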
XLM specific outputs
[[autodoc]] models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput
XLMModel
[[autodoc]] XLMModel
    - forward
XLMWithLMHeadModel
[[autodoc]] XLMWithLMHeadModel
    - forward
XLMForSequenceClassification
[[autodoc]] XLMForSequenceClassification
    - forward
XLMForMultipleChoice
[[autodoc]] XLMForMultipleChoice
    - forward
XLMForTokenClassification
[[autodoc]] XLMForTokenClassification
    - forward
XLMForQuestionAnsweringSimple
[[autodoc]] XLMForQuestionAnsweringSimple
    - forward
XLMForQuestionAnswering
[[autodoc]] XLMForQuestionAnswering
    - forward