Qwen2MoE
This model was released on 2024-07-15 and added to Hugging Face Transformers on 2024-03-27.
Qwen2MoE is a Mixture-of-Experts (MoE) variant of Qwen2, available as a base model and an aligned chat model. It uses SwiGLU activation, grouped query attention, and a mix of sliding window attention and full attention, and its tokenizer is adaptive to multiple natural languages and code.
The MoE architecture is built by upcycling the dense Qwen language models. For example, Qwen1.5-MoE-A2.7B is upcycled from Qwen-1.8B; it has 14.3B parameters in total, but only 2.7B of them are activated at inference time.
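The expert layout behind these numbers can be read directly from the checkpoint's configuration. A minimal sketch, assuming Hub access to the Qwen/Qwen1.5-MoE-A2.7B checkpoint; the printed fields are standard Qwen2MoeConfig attributes:

```python
from transformers import AutoConfig

# Download only the configuration, not the weights.
config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")

print(config.model_type)           # "qwen2_moe"
print(config.num_experts)          # routed experts per MoE layer
print(config.num_experts_per_tok)  # experts activated for each token
print(config.num_key_value_heads)  # grouped query attention: key/value heads shared across query heads
print(config.sliding_window)       # window size for the sliding window attention layers
```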
You can find all the original checkpoints in the Qwen1.5 collection.
The examples below demonstrate how to generate text with Pipeline and with AutoModelForCausalLM.
```python
import torch
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="Qwen/Qwen1.5-MoE-A2.7B",
    dtype=torch.bfloat16,
    device_map=0
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about the Qwen2 model family."},
]
outputs = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"][-1]["content"])
```
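For chat inputs, the pipeline returns the full message list with the new assistant turn appended, so a follow-up question can reuse it directly. A small sketch continuing the `pipe` object above; the follow-up prompt is illustrative:

```python
# "generated_text" holds the chat history, including the assistant's reply.
messages = outputs[0]["generated_text"]
messages.append({"role": "user", "content": "How many experts are active per token?"})

outputs = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"][-1]["content"])
```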
The same chat can be run with AutoModelForCausalLM by applying the chat template manually, calling generate(), and decoding only the newly generated tokens.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B-Chat",
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    cache_implementation="static",
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
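To print tokens as they are generated instead of waiting for the full completion, generate() accepts a streamer. A sketch using TextStreamer with the model, tokenizer, and inputs loaded above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced; skip echoing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    model_inputs.input_ids,
    streamer=streamer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
```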
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends. The example below uses bitsandbytes to quantize the weights to 8-bits.
```python
# pip install -U flash-attn --no-build-isolation
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_8bit=True
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B-Chat",
    dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
    attn_implementation="flash_attention_2"
)

inputs = tokenizer("The Qwen2 model family is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
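bitsandbytes also supports 4-bit quantization, which cuts memory further at some cost in quality. A variant of the config above for the same checkpoint; the NF4 settings shown are common defaults, not values prescribed by the model card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute and double quantization.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B-Chat",
    device_map="auto",
    quantization_config=quantization_config,
)
```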
Qwen2MoeConfig
[[autodoc]] Qwen2MoeConfig
Qwen2MoeModel
[[autodoc]] Qwen2MoeModel
    - forward
Qwen2MoeForCausalLM
[[autodoc]] Qwen2MoeForCausalLM
    - forward
Qwen2MoeForSequenceClassification
[[autodoc]] Qwen2MoeForSequenceClassification
    - forward
Qwen2MoeForTokenClassification
[[autodoc]] Qwen2MoeForTokenClassification
    - forward
Qwen2MoeForQuestionAnswering
[[autodoc]] Qwen2MoeForQuestionAnswering
    - forward