Donut
This model was released on 2021-11-30 and added to Hugging Face Transformers on 2022-08-12.
Donut (Document Understanding Transformer) is a visual document understanding model that doesn't require an Optical Character Recognition (OCR) engine. Unlike traditional approaches that extract text with OCR before processing, Donut uses an end-to-end Transformer-based architecture to analyze document images directly. This eliminates OCR-related inefficiencies, making it more accurate and adaptable to diverse languages and formats.
Donut features a vision encoder (Swin) and a text decoder (BART). Swin converts document images into embeddings, and BART processes them into meaningful text sequences.
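You can see this split directly on a loaded checkpoint. The snippet below is a minimal sketch, assuming the `naver-clova-ix/donut-base` checkpoint, that loads Donut as a `VisionEncoderDecoderModel` and inspects its two halves.

```python
# Minimal sketch: Donut is a VisionEncoderDecoderModel pairing a Swin vision
# encoder with a BART-style text decoder. Assumes the naver-clova-ix/donut-base
# checkpoint; the fine-tuned checkpoints are structured the same way.
from transformers import VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

print(type(model.encoder).__name__)     # the Swin-based vision encoder
print(type(model.decoder).__name__)     # the BART-style text decoder
print(model.config.encoder.image_size)  # input resolution the encoder expects
```

The encoder's image embeddings are passed as cross-attention states to the decoder, which autoregressively generates the output token sequence.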
You can find all the original Donut checkpoints under the Naver Clova Information Extraction organization.
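To browse those checkpoints programmatically, the sketch below uses `huggingface_hub`'s `list_models` to filter the Hub by organization; the `author` value is an assumption taken from the checkpoint names used throughout this page.

```python
# Hedged sketch: list Donut checkpoints on the Hub. Assumes huggingface_hub is
# installed; "naver-clova-ix" is inferred from the checkpoint names on this page.
from huggingface_hub import list_models

for model_info in list_models(author="naver-clova-ix"):
    print(model_info.id)
```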
The examples below demonstrate how to perform document understanding tasks using Donut with Pipeline and AutoModel.
Pipeline

```python
# pip install datasets
import torch
from datasets import load_dataset
from transformers import pipeline

pipeline = pipeline(
    task="document-question-answering",
    model="naver-clova-ix/donut-base-finetuned-docvqa",
    device=0,
    dtype=torch.float16
)
dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[0]["image"]

pipeline(image=image, question="What time is the coffee break?")
```

AutoModel

```python
# pip install datasets
import torch
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = AutoModelForImageTextToText.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[0]["image"]
question = "What time is the coffee break?"
task_prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"
inputs = processor(image, task_prompt, return_tensors="pt")

outputs = model.generate(
    input_ids=inputs.input_ids,
    pixel_values=inputs.pixel_values,
    max_length=512
)
answer = processor.decode(outputs[0], skip_special_tokens=True)
print(answer)
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses torchao to quantize only the weights to int4.
```python
# pip install datasets torchao
import torch
from datasets import load_dataset
from transformers import TorchAoConfig, AutoProcessor, AutoModelForImageTextToText

quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
processor = AutoProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = AutoModelForImageTextToText.from_pretrained(
    "naver-clova-ix/donut-base-finetuned-docvqa",
    quantization_config=quantization_config
)

dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[0]["image"]
question = "What time is the coffee break?"
task_prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"
inputs = processor(image, task_prompt, return_tensors="pt")

outputs = model.generate(
    input_ids=inputs.input_ids,
    pixel_values=inputs.pixel_values,
    max_length=512
)
answer = processor.decode(outputs[0], skip_special_tokens=True)
print(answer)
```
Notes

- Use Donut for document image classification as shown below.
  ```python
  >>> import re
  >>> from transformers import DonutProcessor, VisionEncoderDecoderModel
  >>> from accelerate import Accelerator
  >>> from datasets import load_dataset
  >>> import torch

  >>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
  >>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
  >>> device = Accelerator().device
  >>> model.to(device)  # doctest: +IGNORE_RESULT

  >>> # load document image
  >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test")
  >>> image = dataset[1]["image"]

  >>> # prepare decoder inputs
  >>> task_prompt = "<s_rvlcdip>"
  >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
  >>> pixel_values = processor(image, return_tensors="pt").pixel_values

  >>> outputs = model.generate(
  ...     pixel_values.to(device),
  ...     decoder_input_ids=decoder_input_ids.to(device),
  ...     max_length=model.decoder.config.max_position_embeddings,
  ...     pad_token_id=processor.tokenizer.pad_token_id,
  ...     eos_token_id=processor.tokenizer.eos_token_id,
  ...     use_cache=True,
  ...     bad_words_ids=[[processor.tokenizer.unk_token_id]],
  ...     return_dict_in_generate=True,
  ... )

  >>> sequence = processor.batch_decode(outputs.sequences)[0]
  >>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
  >>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token
  >>> print(processor.token2json(sequence))
  {'class': 'advertisement'}
  ```
- Use Donut for document parsing as shown below.
  ```python
  >>> import re
  >>> from accelerate import Accelerator
  >>> from datasets import load_dataset
  >>> from transformers import DonutProcessor, VisionEncoderDecoderModel
  >>> import torch

  >>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
  >>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
  >>> device = Accelerator().device
  >>> model.to(device)  # doctest: +IGNORE_RESULT

  >>> # load document image
  >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test")
  >>> image = dataset[2]["image"]

  >>> # prepare decoder inputs
  >>> task_prompt = "<s_cord-v2>"
  >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
  >>> pixel_values = processor(image, return_tensors="pt").pixel_values

  >>> outputs = model.generate(
  ...     pixel_values.to(device),
  ...     decoder_input_ids=decoder_input_ids.to(device),
  ...     max_length=model.decoder.config.max_position_embeddings,
  ...     pad_token_id=processor.tokenizer.pad_token_id,
  ...     eos_token_id=processor.tokenizer.eos_token_id,
  ...     use_cache=True,
  ...     bad_words_ids=[[processor.tokenizer.unk_token_id]],
  ...     return_dict_in_generate=True,
  ... )

  >>> sequence = processor.batch_decode(outputs.sequences)[0]
  >>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
  >>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token
  >>> print(processor.token2json(sequence))
  {'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}}
  ```
DonutSwinConfig
[[autodoc]] DonutSwinConfig
DonutImageProcessor
[[autodoc]] DonutImageProcessor
    - preprocess
DonutImageProcessorFast
[[autodoc]] DonutImageProcessorFast
    - preprocess
DonutProcessor
[[autodoc]] DonutProcessor
    - __call__
    - from_pretrained
    - save_pretrained
    - batch_decode
    - decode
DonutSwinModel
[[autodoc]] DonutSwinModel
    - forward
DonutSwinForImageClassification
[[autodoc]] transformers.DonutSwinForImageClassification
    - forward