Persimmon
This model was released on 2023-09-07 and added to Hugging Face Transformers on 2023-09-12.
Overview
The Persimmon model was created by ADEPT and authored by Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, and Arushi Somani.
The authors introduced Persimmon-8B, a decoder model based on the classic transformer architecture, with query and key normalization. Persimmon-8B is a fully permissively licensed model with approximately 8 billion parameters, released under the Apache license. Key attributes of Persimmon-8B are its long context size (16K), strong performance, and capabilities for multimodal extensions.
The authors showcase their approach to model evaluation, focusing on practical text generation, mirroring how users interact with language models. The work also includes a comparative analysis, pitting Persimmon-8B against other prominent models (MPT 7B Instruct and Llama 2 Base 7B 1-Shot), across various evaluation tasks. The results demonstrate Persimmon-8B’s competitive performance, even with limited training data.
In terms of model details, the work outlines the architecture and training methodology of Persimmon-8B, providing insights into its design choices, sequence length, and dataset composition. The authors present a fast inference code that outperforms traditional implementations through operator fusion and CUDA graph utilization while maintaining code coherence. They express their anticipation of how the community will leverage this contribution to drive innovation, hinting at further upcoming releases as part of an ongoing series of developments.
This model was contributed by ArthurZ. The original code can be found here.
Usage tips
The Persimmon models were trained using bfloat16, but the original inference uses float16. The checkpoints uploaded on the Hub use dtype = 'float16', which will be used by the AutoModel API to cast the checkpoints from torch.float32 to torch.float16.
The dtype of the online weights is mostly irrelevant unless you use dtype="auto" when initializing the model with model = AutoModelForCausalLM.from_pretrained("path", dtype="auto"). Otherwise, the model is first downloaded (using the dtype of the online checkpoint) and then cast to torch's default dtype, torch.float32. Users should specify the dtype they want; if they don't, it will be torch.float32.
Finetuning the model in float16 is not recommended and is known to produce nan values; the model should instead be fine-tuned in bfloat16.
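The dtype selection described above can be sketched as follows. This is a hypothetical helper written for illustration only (resolve_dtype is not a transformers API; the real logic lives inside from_pretrained):

```python
def resolve_dtype(requested=None, checkpoint_dtype="float16"):
    """Return the dtype the loaded weights end up in, mimicking the
    behavior described above (illustrative sketch, not transformers code)."""
    if requested == "auto":
        # dtype="auto": keep the dtype stored in the online checkpoint.
        return checkpoint_dtype
    if requested is not None:
        # An explicitly requested dtype always wins.
        return requested
    # No dtype given: weights are cast to torch's default, float32.
    return "float32"
```

In short: pass dtype="auto" to keep the checkpoint's float16, pass torch.bfloat16 explicitly for fine-tuning, and expect float32 if you pass nothing.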
Tips:
- To convert the model, first clone the original repository and download the checkpoints:

```bash
git clone https://github.com/persimmon-ai-labs/adept-inference
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_base_model_release.tar
tar -xvf 8b_base_model_release.tar
python src/transformers/models/persimmon/convert_persimmon_weights_to_hf.py --input_dir /path/to/downloaded/persimmon/weights/ --output_dir /output/path \
    --pt_model_path /path/to/8b_chat_model_release/iter_0001251/mp_rank_00/model_optim_rng.pt \
    --ada_lib_path /path/to/adept-inference
```

For the chat model:

```bash
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_chat_model_release.tar
tar -xvf 8b_chat_model_release.tar
```

Thereafter, models can be loaded via:

```python
from transformers import PersimmonForCausalLM, PersimmonTokenizer

model = PersimmonForCausalLM.from_pretrained("/output/path")
tokenizer = PersimmonTokenizer.from_pretrained("/output/path")
```
- Persimmon uses a sentencepiece-based tokenizer with a Unigram model. It supports byte fallback, which is only available in tokenizers==0.14.0 for the fast tokenizer. The LlamaTokenizer is used as it is a standard wrapper around sentencepiece. The chat template will be updated with the templating functions in a follow-up PR!
- The authors suggest using the following prompt format for the chat mode:

```python
f"human: {prompt}\n\nadept:"
```
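The chat prompt format above can be wrapped in a small helper before tokenization. This is a sketch (build_chat_prompt is a hypothetical name, not part of transformers):

```python
def build_chat_prompt(prompt: str) -> str:
    """Format a user message in the chat style suggested by the authors:
    the human turn, a blank line, then the model's turn marker."""
    return f"human: {prompt}\n\nadept:"
```

The returned string is what you pass to the tokenizer; the model then continues generation after the "adept:" marker.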
PersimmonConfig

[[autodoc]] PersimmonConfig

PersimmonModel

[[autodoc]] PersimmonModel
    - forward

PersimmonForCausalLM

[[autodoc]] PersimmonForCausalLM
    - forward

PersimmonForSequenceClassification

[[autodoc]] PersimmonForSequenceClassification
    - forward

PersimmonForTokenClassification

[[autodoc]] PersimmonForTokenClassification
    - forward