# Switch Transformers

This model was released on 2021-01-11 and added to Hugging Face Transformers on 2022-11-15.

PyTorch

Switch Transformers is a sparse T5 model where the MLP layer is replaced by a Mixture-of-Experts (MoE) layer. A routing mechanism associates each token with one expert, and each expert is a dense MLP. Sparsity enables better scaling, and the routing mechanism lets the model select relevant weights on the fly, which increases model capacity.
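At a high level, the router produces a probability distribution over experts for every token and dispatches each token to its single highest-probability expert (top-1 routing). The sketch below is a minimal PyTorch illustration of this idea, not the library's internal implementation; all names in it are invented for the example.

```py
# Minimal top-1 (Switch-style) routing sketch -- illustrative only.
import torch
import torch.nn as nn

class ToySwitchMLP(nn.Module):
    def __init__(self, d_model=16, d_ff=32, num_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # one logit per expert
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
                for _ in range(num_experts)
            ]
        )

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, d_model)
        probs = self.router(hidden_states).softmax(dim=-1)   # (batch, seq_len, num_experts)
        expert_weight, expert_index = probs.max(dim=-1)      # top-1 expert per token
        output = torch.zeros_like(hidden_states)
        for i, expert in enumerate(self.experts):
            mask = expert_index == i                         # tokens routed to expert i
            if mask.any():
                output[mask] = expert(hidden_states[mask])
        # scale by the router probability so the router receives gradients
        return output * expert_weight.unsqueeze(-1)

tokens = torch.randn(2, 5, 16)
print(ToySwitchMLP()(tokens).shape)  # torch.Size([2, 5, 16])
```

Because only one expert runs per token, the compute per token stays roughly constant as the number of experts (and therefore the parameter count) grows.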

You can find all the original Switch Transformers checkpoints under the Switch Transformer collection.

Click on the Switch Transformers models in the right sidebar for more examples of how to apply Switch Transformers to different natural language tasks.

The example below demonstrates how to predict the masked token with Pipeline, AutoModel, and from the command line.

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text2text-generation",
    model="google/switch-base-8",
    dtype=torch.float16,
    device=0
)
print(pipeline("The capital of France is <extra_id_0>."))
```
```py
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/switch-base-8", device_map="auto", dtype=torch.float16
)

input_text = "The capital of France is <extra_id_0>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
```bash
echo -e "The capital of France is <extra_id_0>." | transformers run --task text2text-generation --model google/switch-base-8 --device 0
# [{'generated_text': 'Paris.'}]
```
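You can also ask the model to return the raw router decisions from a forward pass to see how tokens are distributed across experts. The snippet below continues the AutoModel example above; `output_router_logits` is the flag exposed by the Switch Transformers modeling code, but treat the exact structure of the returned fields as an assumption to verify on your installed version.

```py
# Continues the AutoModel example above.
# output_router_logits asks the model to return the per-layer router outputs.
# The field names below match recent versions of the modeling code -- verify
# them on your version before relying on this sketch.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]], device=model.device)
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, output_router_logits=True)
print(len(outputs.encoder_router_logits))  # typically one entry per encoder block
```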

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize only the weights to 8-bit precision.

```py
# pip install bitsandbytes
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/switch-base-8", device_map="auto", quantization_config=quantization_config
)

input_text = "The capital of France is <extra_id_0>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

## SwitchTransformersConfig

[[autodoc]] SwitchTransformersConfig
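The config exposes the MoE-specific hyperparameters (such as `num_experts`) alongside the usual T5-style sizes, so you can build a small randomly initialized model for experimentation. The sizes below are arbitrary toy values, not those of any released checkpoint.

```py
from transformers import SwitchTransformersConfig, SwitchTransformersForConditionalGeneration

# A tiny randomly initialized model -- arbitrary toy sizes for experimentation.
config = SwitchTransformersConfig(
    d_model=64,
    d_ff=128,
    num_layers=2,
    num_sparse_encoder_layers=1,
    num_decoder_layers=2,
    num_sparse_decoder_layers=1,
    num_heads=4,
    num_experts=4,  # experts per sparse MLP layer
)
model = SwitchTransformersForConditionalGeneration(config)
print(f"{model.num_parameters():,} parameters")
```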

## SwitchTransformersTop1Router

[[autodoc]] SwitchTransformersTop1Router
    - forward

## SwitchTransformersSparseMLP

[[autodoc]] SwitchTransformersSparseMLP
    - forward

## SwitchTransformersModel

[[autodoc]] SwitchTransformersModel
    - forward

## SwitchTransformersForConditionalGeneration

[[autodoc]] SwitchTransformersForConditionalGeneration
    - forward

## SwitchTransformersEncoderModel

[[autodoc]] SwitchTransformersEncoderModel
    - forward