
# Nanonets-OCR2-3B


Nanonets-OCR2: A model for transforming documents into structured markdown with intelligent content recognition and semantic tagging

Nanonets-OCR2 by [Nanonets](https://nanonets.com) is a family of powerful, state-of-the-art image-to-markdown OCR models that go far beyond traditional text extraction. The models transform documents into structured markdown with intelligent content recognition and semantic tagging, making the output ideal for downstream processing by Large Language Models (LLMs).

Nanonets-OCR2 is packed with features designed to handle complex documents with ease:

* **LaTeX Equation Recognition:** Automatically converts mathematical equations and formulas into properly formatted LaTeX syntax, distinguishing between inline (`$...$`) and display (`$$...$$`) equations.
* **Intelligent Image Description:** Describes images within documents using structured `<img>` tags, making them digestible for LLM processing. It can describe various image types, including logos, charts, and graphs, detailing their content, style, and context.
* **Signature Detection & Isolation:** Identifies and isolates signatures from other text, outputting them within a `<signature>` tag. This is crucial for processing legal and business documents.
* **Watermark Extraction:** Detects and extracts watermark text from documents, placing it within a `<watermark>` tag.
* **Smart Checkbox Handling:** Converts form checkboxes and radio buttons into standardized Unicode symbols (☐, ☑, ☒) for consistent and reliable processing.
* **Complex Table Extraction:** Accurately extracts complex tables from documents and converts them into both markdown and HTML table formats.
* **Flowcharts & Organizational Charts:** Extracts flowcharts and organizational charts as [mermaid](https://mermaid.js.org) code.
* **Handwritten Documents:** The model is trained on handwritten documents across multiple languages.
* **Multilingual:** The model is trained on documents in multiple languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Arabic, and many more.
* **Visual Question Answering (VQA):** The model is designed to provide the answer directly if it is present in the document; otherwise, it responds with "Not mentioned."
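For illustration, here is a hypothetical fragment of the kind of tagged markdown these conventions produce (the document content is invented; the tag names and checkbox symbols follow the feature list above):

```markdown
# Quarterly Report <watermark>CONFIDENTIAL</watermark>

Revenue growth is computed as $r = \frac{P_t - P_{t-1}}{P_{t-1}}$.

<img>Bar chart comparing quarterly revenue; Q4 shows the tallest bar.</img>

☑ Approved ☐ Rejected

<signature>J. Doe</signature>

<page_number>14</page_number>
```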
## Nanonets-OCR2 Family

| Model | Access Link |
|-------|-------------|
| Nanonets-OCR2-Plus | [Docstrange link](https://docstrange.nanonets.com/) |
| Nanonets-OCR2-3B | [Hugging Face link](https://huggingface.co/nanonets/Nanonets-OCR2-3B) |
| Nanonets-OCR2-1.5B-exp | [Hugging Face link](https://huggingface.co/nanonets/Nanonets-OCR2-1.5B-exp) |

## Usage

### Using transformers

```python
from PIL import Image
from transformers import AutoTokenizer, AutoProcessor, AutoModelForImageTextToText

model_path = "nanonets/Nanonets-OCR2-3B"

model = AutoModelForImageTextToText.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="flash_attention_2",  # requires flash-attn; drop this arg to use default attention
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained(model_path)
processor = AutoProcessor.from_pretrained(model_path)


def ocr_page_with_nanonets_s(image_path, model, processor, max_new_tokens=4096):
    prompt = """Extract the text from the above document as if you were reading it naturally. Return the tables in html format. Return the equations in LaTeX representation. If there is an image in the document and image caption is not present, add a small description of the image inside the <img></img> tag; otherwise, add the image caption inside <img></img>. Watermarks should be wrapped in brackets. Ex: <watermark>OFFICIAL COPY</watermark>. Page numbers should be wrapped in brackets. Ex: <page_number>14</page_number> or <page_number>9/22</page_number>. Prefer using ☐ and ☑ for check boxes."""
    image = Image.open(image_path)
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": [
            {"type": "image", "image": f"file://{image_path}"},
            {"type": "text", "text": prompt},
        ]},
    ]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = processor(text=[text], images=[image], padding=True, return_tensors="pt")
    inputs = inputs.to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)]
    output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    return output_text[0]


image_path = "/path/to/your/document.jpg"
result = ocr_page_with_nanonets_s(image_path, model, processor, max_new_tokens=15000)
print(result)
```

### Using vLLM

1. Start the vLLM server.

```bash
vllm serve nanonets/Nanonets-OCR2-3B
```

2. Predict with the model.

```python
from openai import OpenAI
import base64

client = OpenAI(api_key="123", base_url="http://localhost:8000/v1")

model = "nanonets/Nanonets-OCR2-3B"


def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


def ocr_page_with_nanonets_s(img_base64):
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{img_base64}"},
                    },
                    {
                        "type": "text",
                        "text": "Extract the text from the above document as if you were reading it naturally. Return the tables in html format. Return the equations in LaTeX representation. If there is an image in the document and image caption is not present, add a small description of the image inside the <img></img> tag; otherwise, add the image caption inside <img></img>. Watermarks should be wrapped in brackets. Ex: <watermark>OFFICIAL COPY</watermark>. Page numbers should be wrapped in brackets. Ex: <page_number>14</page_number> or <page_number>9/22</page_number>. Prefer using ☐ and ☑ for check boxes.",
                    },
                ],
            }
        ],
        temperature=0.0,
        max_tokens=15000,
    )
    return response.choices[0].message.content


test_img_path = "/path/to/your/document.jpg"
img_base64 = encode_image(test_img_path)
print(ocr_page_with_nanonets_s(img_base64))
```

### Using Docstrange

```python
import requests

url = "https://extraction-api.nanonets.com/extract"
headers = {"Authorization": "<your-api-key>"}  # placeholder: supply your Docstrange API key here
files = {"file": open("/path/to/your/file", "rb")}
data = {"output_type": "markdown", "model": "nanonets"}

response = requests.post(url, headers=headers, files=files, data=data)
print(response.json())
```

Check out [Docstrange](https://docstrange.nanonets.com/) for more details.
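Because the semantic tags are plain text in the returned markdown, they are straightforward to post-process before handing the result to a downstream LLM. Below is a minimal, hypothetical sketch (the helper name and tag-handling choices are ours, not part of any Nanonets API) that drops watermark and page-number tags and flattens signature tags:

```python
import re


def clean_for_llm(ocr_markdown: str) -> str:
    """Strip layout-only tags from Nanonets-OCR2 output (hypothetical helper)."""
    # Watermarks and page numbers rarely matter for downstream reasoning.
    text = re.sub(r"<watermark>.*?</watermark>", "", ocr_markdown, flags=re.DOTALL)
    text = re.sub(r"<page_number>.*?</page_number>", "", text, flags=re.DOTALL)
    # Keep signature text, but flag it explicitly.
    text = re.sub(r"<signature>(.*?)</signature>", r"[signature: \1]", text, flags=re.DOTALL)
    return text.strip()


# e.g. clean_for_llm(result), with `result` from the transformers example above
```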
## Evaluation

### Markdown Evaluations

#### Nanonets OCR2 Plus

| Model | Win Rate vs Nanonets OCR2 Plus (%) | Lose Rate vs Nanonets OCR2 Plus (%) | Both Correct (%) |
|-------|------------------------------------|-------------------------------------|------------------|
| Gemini 2.5 Flash (No Thinking) | 34.35 | 57.60 | 8.06 |
| Nanonets OCR2 3B | 29.37 | 54.58 | 16.04 |
| Nanonets-OCR-s | 24.86 | 66.12 | 9.02 |
| Nanonets OCR2 1.5B exp | 13.00 | 81.20 | 5.79 |
| GPT-5 (Thinking: low) | 23.53 | 74.86 | 1.60 |
#### Nanonets OCR2 3B
| Model | Win Rate vs Nanonets OCR2 3B (%) | Lose Rate vs Nanonets OCR2 3B (%) | Both Correct (%) |
|-------|----------------------------------|-----------------------------------|------------------|
| Gemini 2.5 Flash (No Thinking) | 39.98 | 52.43 | 7.58 |
| Nanonets-OCR-s | 30.61 | 58.28 | 11.12 |
| Nanonets OCR2 1.5B exp | 14.78 | 79.18 | 6.04 |
| GPT-5 | 25.00 | 72.87 | 2.13 |
### Visual Question Answering (VQA) Evaluations
| Dataset | Nanonets OCR2 Plus | Nanonets OCR2 3B | Qwen2.5-VL-72B-Instruct | Gemini 2.5 Flash |
|---------|--------------------|------------------|--------------------------|------------------|
| ChartQA (IDP-Leaderboard) | 79.20 | 78.56 | 76.20 | 84.82 |
| DocVQA (IDP-Leaderboard) | 85.15 | 89.43 | 84.00 | 85.51 |
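The VQA behaviour measured above can be exercised through the same vLLM server from the Usage section. Below is a minimal sketch, assuming the server is running; sending the question as a plain text prompt alongside the page image is our assumption (per the model card, the model answers directly or replies "Not mentioned."):

```python
import base64

from openai import OpenAI

client = OpenAI(api_key="123", base_url="http://localhost:8000/v1")


def ask_document(image_path: str, question: str) -> str:
    """Send one document page plus a question to the vLLM server."""
    with open(image_path, "rb") as f:
        img_base64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="nanonets/Nanonets-OCR2-3B",
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{img_base64}"}},
                {"type": "text", "text": question},
            ],
        }],
        temperature=0.0,
        max_tokens=256,
    )
    return response.choices[0].message.content


# Hypothetical usage: the file path and question are placeholders.
print(ask_document("/path/to/your/invoice.jpg", "What is the total amount due?"))
```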
## Tips to improve accuracy

1. Increasing the image resolution will improve the model's performance.
2. For complex tables (e.g. financial documents), using `repetition_penalty=1` gives better results. You can also try the following prompt, which generally works better for financial documents; a sketch of where the penalty plugs in follows the code block.

```python
user_prompt = """Extract the text from the above document as if you were reading it naturally. Return the tables in HTML format. Return the equations in LaTeX representation. If there is an image in the document and image caption is not present, add a small description of the image inside the <img></img> tag; otherwise, add the image caption inside <img></img>. Watermarks should be wrapped in brackets. Ex: <watermark>OFFICIAL COPY</watermark>. Page numbers should be wrapped in brackets. Ex: <page_number>14</page_number> or <page_number>9/22</page_number>. Prefer using ☐ and ☑ for check boxes. Only return HTML table within <table></table>."""
```
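A minimal sketch of where the penalty plugs in, reusing `model` and `inputs` from the transformers example above; with the vLLM server, the same parameter can be passed through the OpenAI client's `extra_body`, which vLLM accepts:

```python
# Reuses `model` and `inputs` from the "Using transformers" example.
output_ids = model.generate(
    **inputs,
    max_new_tokens=15000,
    do_sample=False,
    repetition_penalty=1.0,  # neutral value; per the tip above, better for complex tables
)

# vLLM server equivalent (sampling params pass through extra_body):
# client.chat.completions.create(..., extra_body={"repetition_penalty": 1.0})
```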
.""" \`\`\` 3. This is already implemented in [Docstrange](https://docstrange.nanonets.com/?output_type=markdown-financial-docs), please use the \`Markdown (Financial Docs)\` option for processing table heavy financial documents. \`\`\`python import requests url = "https://extraction-api.nanonets.com/extract" headers = \{"Authorization": \} files = \{"file": open("/path/to/your/file", "rb")\} data = \{"output_type": "markdown-financial-docs"\} response = requests.post(url, headers=headers, files=files, data=data) print(response.json()) \`\`\` 4. Model might work best on certain resolution for specific document types. Please check the [cookbooks](https://github.com/NanoNets/Nanonets-OCR2/blob/main/Nanonets-OCR2-Cookbook/image2md.ipynb) for details. ## BibTex \`\`\` @misc\{Nanonets-OCR2, title=\{Nanonets-OCR2: A model for transforming documents into structured markdown with intelligent content recognition and semantic tagging\}, author=\{Souvik Mandal and Ashish Talewar and Siddhant Thakuria and Paras Ahuja and Prathamesh Juvatkar\}, year=\{2025\}, \} \`\`\`


## Reviews

  • BaileyZimX 2025-10-20 11:32
    Ratings: Interesting 4, Helpfulness 4, Correctness 5, Generation Speed 3
    Prompt: Alibaba Financial Statement Processing

    I just noticed the latest Nanonets OCR model release and tested it on a 300+ page financial document, Alibaba's Q3 report (https://www1.hkexnews.hk/listedco/listconews/sehk/2025/0626/2025062601064.pdf). On the first attempt, the Docstrange web app could not process the whole document at once after roughly ten minutes of waiting, so I tried a single page, the Share-based Compensation section of the financial statements. This time the app produced perfect results: it extracted the contents, including a table of Cost of revenue and other line items for 2023, 2024, and 2025. I examined the results closely, and everything is correct. Processing time is perhaps the only issue, but the results are still pretty awesome.
