# CamemBERT
This model was released on 2019-11-10 and added to Hugging Face Transformers on 2020-11-16.
CamemBERT is a language model based on RoBERTa, trained specifically on French text from the OSCAR dataset, which makes it more effective for French-language tasks.
What sets CamemBERT apart is that it was pretrained on a large, high-quality corpus of French data rather than a mix of many languages, giving it a stronger command of French than many multilingual models.
Common applications of CamemBERT include masked language modeling (fill-mask prediction), text classification (sentiment analysis), token classification (named entity recognition), and sentence pair classification (entailment tasks).
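For a downstream task such as sentiment analysis, the base checkpoint can be loaded with a task-specific head. Below is a minimal sketch using `AutoModelForSequenceClassification`; the label count and label names are illustrative, and the classification head is randomly initialized until you fine-tune it.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
# num_labels and id2label are illustrative; the classification head
# is randomly initialized and must be fine-tuned before use
model = AutoModelForSequenceClassification.from_pretrained(
    "camembert-base",
    num_labels=2,
    id2label={0: "négatif", 1: "positif"},
)

inputs = tokenizer("Le film était excellent !", return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```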
You can find all the original CamemBERT checkpoints under the ALMAnaCH organization.
Click on the CamemBERT models in the right sidebar for more examples of how to apply CamemBERT to different NLP tasks.
The examples below demonstrate how to predict the `<mask>` token with `Pipeline`, `AutoModel`, and from the command line.
With `Pipeline`:

```python
import torch
from transformers import pipeline

pipeline = pipeline("fill-mask", model="camembert-base", dtype=torch.float16, device=0)
pipeline("Le camembert est un délicieux fromage <mask>.")
```
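The fill-mask pipeline returns a list of candidate fills rather than a single answer. A quick sketch for inspecting each candidate's score and decoded token (the `fill_mask` variable name is illustrative):

```python
import torch
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="camembert-base", dtype=torch.float16, device=0)

# Each prediction is a dict with "score", "token", "token_str", and "sequence" keys
for prediction in fill_mask("Le camembert est un délicieux fromage <mask>."):
    print(f'{prediction["token_str"]!r}: {prediction["score"]:.3f}')
```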
With `AutoModel`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForMaskedLM.from_pretrained(
    "camembert-base", dtype="auto", device_map="auto", attn_implementation="sdpa"
)
inputs = tokenizer("Le camembert est un délicieux fromage <mask>.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

# Locate the <mask> position and decode the highest-scoring token
masked_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
```
From the command line:

```bash
echo -e "Le camembert est un délicieux fromage <mask>." | transformers run --task fill-mask --model camembert-base --device 0
```

Quantization reduces the memory burden of large models by representing the weights in lower precision. Refer to the Quantization overview for available options.
The example below uses bitsandbytes to quantize the weights to 8 bits.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForMaskedLM.from_pretrained(
    "almanach/camembert-large",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("almanach/camembert-large")

inputs = tokenizer("Le camembert est un délicieux fromage <mask>.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

# Locate the <mask> position and decode the highest-scoring token
masked_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
```
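To verify the savings, you can check how much memory the quantized model occupies; a quick check using `get_memory_footprint`, continuing from the snippet above:

```python
# Reports the size of the model's parameters and buffers in bytes
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```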
## CamembertConfig

[[autodoc]] CamembertConfig

## CamembertTokenizer

[[autodoc]] CamembertTokenizer
    - get_special_tokens_mask
    - save_vocabulary

## CamembertTokenizerFast

[[autodoc]] CamembertTokenizerFast

## CamembertModel

[[autodoc]] CamembertModel

## CamembertForCausalLM

[[autodoc]] CamembertForCausalLM

## CamembertForMaskedLM

[[autodoc]] CamembertForMaskedLM

## CamembertForSequenceClassification

[[autodoc]] CamembertForSequenceClassification

## CamembertForMultipleChoice

[[autodoc]] CamembertForMultipleChoice

## CamembertForTokenClassification

[[autodoc]] CamembertForTokenClassification

## CamembertForQuestionAnswering

[[autodoc]] CamembertForQuestionAnswering