MPT
This model was released on 2023-05-05 and added to Hugging Face Transformers on 2023-07-25.
Overview
The MPT model was proposed by the MosaicML team and released in multiple sizes and finetuned variants. The MPT models are a series of open-source, commercially usable LLMs pre-trained on 1T tokens.
MPT models are GPT-style decoder-only transformers with several improvements: performance-optimized layer implementations, architecture changes that provide greater training stability, and the elimination of context length limits by replacing positional embeddings with ALiBi (Attention with Linear Biases).
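To make the ALiBi replacement concrete, here is a minimal, self-contained sketch of the bias it adds to attention scores. The function names are illustrative, not the Transformers API, and the slope schedule follows the ALiBi paper for head counts that are powers of two:

```python
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric sequence of per-head slopes: 2^(-8/n), 2^(-16/n), ..., 2^(-8)
    # (exact for head counts that are powers of two).
    start = 2.0 ** (-8.0 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Bias of shape (num_heads, seq_len, seq_len): each head penalizes
    # query-key pairs linearly in their distance, so no learned positional
    # embeddings (and no fixed context limit) are needed.
    positions = torch.arange(seq_len)
    distance = (positions.view(-1, 1) - positions.view(1, -1)).clamp(min=0)
    return -distance.float() * alibi_slopes(num_heads).view(-1, 1, 1)
```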
- MPT base: base models pre-trained on next-token prediction
- MPT instruct: MPT base models fine-tuned on instruction-based tasks
- MPT storywriter: MPT base models fine-tuned for 2500 steps on 65k-token excerpts of fiction books from the books3 corpus, which enables the model to handle very long sequences
The original code is available at the llm-foundry repository.
Read more about it in the release blogpost.
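Since the model is supported natively in Transformers, a released checkpoint can be loaded without custom code. Below is a minimal generation sketch, assuming the `mosaicml/mpt-7b` base checkpoint and enough memory for a 7B model; adjust dtype and device mapping to your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    torch_dtype=torch.bfloat16,  # halves memory vs. float32
    device_map="auto",           # requires the accelerate package
)

inputs = tokenizer("MosaicML released MPT because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```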
Usage tips
- Learn more about the techniques behind the training of the model in this section of the llm-foundry repository.
- If you want to use the advanced version of the model (Triton kernels, direct flash attention integration), you can still use the original model implementation by adding `trust_remote_code=True` when calling `from_pretrained`, as shown in the sketch after this list.
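A sketch of loading the original MosaicML implementation instead of the Transformers-native one. The `attn_config` override follows MosaicML's model card; verify it against the checkpoint revision you use:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# trust_remote_code=True pulls MosaicML's own modeling code from the Hub
config = AutoConfig.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True)
config.attn_config["attn_impl"] = "triton"  # optional: Triton attention kernels

model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    config=config,
    trust_remote_code=True,
)
```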
Resources
- Fine-tuning Notebook on how to fine-tune MPT-7B on a free Google Colab instance to turn the model into a chatbot.
MptConfig
[[autodoc]] MptConfig - all
MptModel
[[autodoc]] MptModel - forward
MptForCausalLM
[[autodoc]] MptForCausalLM - forward
MptForSequenceClassification
[[autodoc]] MptForSequenceClassification - forward
MptForTokenClassification
[[autodoc]] MptForTokenClassification - forward
MptForQuestionAnswering
[[autodoc]] MptForQuestionAnswering - forward