---
viewer: false
tags:
- uv-script
- ocr
- vision-language-model
- document-processing
---
# OCR UV Scripts

Part of uv-scripts - ready-to-run ML tools powered by UV.

Ready-to-run OCR scripts that work with `uv run` - no setup required!
## 🚀 Quick Start with HuggingFace Jobs

Run OCR on any dataset without needing your own GPU:
```bash
# Quick test with 10 samples
hf jobs uv run --flavor l4x1 \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    your-input-dataset your-output-dataset \
    --max-samples 10
```
That's it! The script will:

- ✅ Process the first 10 images from your dataset
- ✅ Add OCR results as a new `markdown` column
- ✅ Push the results to a new dataset
- 📊 View results at `https://huggingface.co/datasets/[your-output-dataset]` (or load them in Python, as sketched below)
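Once the job finishes, you can inspect the output directly with the `datasets` library. A minimal sketch, assuming the placeholder dataset name above and the default `markdown` output column:

```python
from datasets import load_dataset

# Placeholder repo id - use the output dataset name you passed to the job
ds = load_dataset("your-output-dataset", split="train")

# Each row keeps its original columns plus the new `markdown` column
print(ds[0]["markdown"][:500])
```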
## 📋 Available Scripts
### Nanonets OCR (`nanonets-ocr.py`)

State-of-the-art document OCR using [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) that handles:

- 📐 LaTeX equations - Mathematical formulas preserved
- 📊 Tables - Extracted as HTML (see the snippet after this list)
- 📝 Document structure - Headers, lists, formatting maintained
- 🖼️ Images - Captions and descriptions included
- ☑️ Forms - Checkboxes rendered as ☐/☑
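Because tables arrive as inline HTML, they are easy to pull back out of the OCR text. A minimal sketch, assuming the default `markdown` output column, a placeholder dataset name, and plain `<table>` tags:

```python
import re

from datasets import load_dataset

ds = load_dataset("your-output-dataset", split="train")  # placeholder repo id

# Collect the HTML tables embedded in the OCR text
tables = [
    table
    for text in ds["markdown"]
    for table in re.findall(r"<table>.*?</table>", text, flags=re.DOTALL)
]
print(f"found {len(tables)} tables")
```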
### dots.ocr (`dots-ocr.py`)

Advanced document layout analysis and OCR using [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) that provides:

- 🎯 Layout detection - Bounding boxes for all document elements
- 📑 Category classification - Text, Title, Table, Formula, Picture, etc.
- 📖 Reading order - Preserves natural reading flow
- 🌍 Multilingual support - Handles multiple languages seamlessly
- 🔧 Flexible output - JSON, structured columns, or markdown (see the sketch after this list)
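With `--output-format structured`, the layout lands in per-element columns (defaults listed in the options tables below), which makes filtering by category straightforward. A minimal sketch, assuming the default column names, a placeholder dataset name, and that each column holds one list per row, aligned element by element:

```python
from datasets import load_dataset

ds = load_dataset("your-layout-dataset", split="train")  # placeholder repo id

row = ds[0]
# Default structured-mode columns, assumed aligned element by element
for bbox, category, text in zip(
    row["layout_bboxes"], row["layout_categories"], row["layout_texts"]
):
    if category == "Table":  # categories include Text, Title, Table, Formula, ...
        print(bbox, text[:80])
```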
## 💻 Usage Examples

### Run on HuggingFace Jobs (Recommended)

No GPU? No problem! Run on HF infrastructure:
```bash
# Basic OCR job
hf jobs uv run --flavor l4x1 \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    your-input-dataset your-output-dataset

# Document layout analysis with dots.ocr
hf jobs uv run --flavor l4x1 \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \
    your-input-dataset your-layout-dataset \
    --mode layout-all \
    --output-format structured \
    --use-transformers  # More compatible backend

# Real example with UFO dataset 🛸
hf jobs uv run \
    --flavor a10g-large \
    --image vllm/vllm-openai:latest \
    -s HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    davanstrien/ufo-ColPali \
    your-username/ufo-ocr \
    --image-column image \
    --max-model-len 16384 \
    --batch-size 128

# Private dataset with custom settings
hf jobs uv run --flavor l40sx1 \
    -s HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    private-input private-output \
    --private \
    --batch-size 32
```
### Python API

```python
from huggingface_hub import run_uv_job

job = run_uv_job(
    "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
    args=["input-dataset", "output-dataset", "--batch-size", "16"],
    flavor="l4x1",
)
```
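You can follow the job from Python too. A hedged sketch, assuming your installed `huggingface_hub` version ships the Jobs API helpers `inspect_job` and `fetch_job_logs` and that the returned `job` exposes an `id`:

```python
from huggingface_hub import fetch_job_logs, inspect_job

# Assumption: Jobs API helpers from recent huggingface_hub releases
print(inspect_job(job_id=job.id).status)

# Stream the job logs as they arrive
for line in fetch_job_logs(job_id=job.id):
    print(line)
```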
### Run Locally (Requires GPU)

```bash
# Clone and run
git clone https://huggingface.co/datasets/uv-scripts/ocr
cd ocr
uv run nanonets-ocr.py input-dataset output-dataset

# Or run directly from the URL
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    input-dataset output-dataset

# dots.ocr examples
uv run dots-ocr.py documents analyzed-docs                    # Full layout + OCR
uv run dots-ocr.py scans layouts --mode layout-only           # Layout only
uv run dots-ocr.py papers markdown --output-format markdown   # As markdown
```
## 📁 Works With

Any HuggingFace dataset containing images - documents, forms, receipts, books, handwriting.
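Not sure whether your dataset qualifies? A quick check with the `datasets` library, using a placeholder repo id:

```python
from datasets import load_dataset

ds = load_dataset("your-input-dataset", split="train")  # placeholder repo id

# Look for an image-typed column; if it is not called "image",
# pass its name via --image-column
print(ds.features)
```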
## 🎛️ Configuration Options

### Common Options (Both Scripts)

| Option | Default | Description |
|---|---|---|
| `--image-column` | `image` | Column containing images |
| `--batch-size` | `32` | Images processed together |
| `--max-model-len` | `8192` / `24000`* | Max context length |
| `--max-tokens` | `4096` / `16384`* | Max output tokens |
| `--gpu-memory-utilization` | `0.8` | GPU memory usage (0.0-1.0) |
| `--split` | `train` | Dataset split to process |
| `--max-samples` | `None` | Limit samples (for testing) |
| `--private` | `False` | Make output dataset private |

\*dots.ocr uses higher defaults (24000/16384)
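The common options map straight onto the `args` list of the Python API shown earlier. A minimal sketch with placeholder dataset names and a hypothetical non-default image column:

```python
from huggingface_hub import run_uv_job

job = run_uv_job(
    "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
    args=[
        "input-dataset", "output-dataset",  # placeholder repo ids
        "--image-column", "page",           # hypothetical column name
        "--max-samples", "100",             # quick test run
        "--private",                        # keep the output dataset private
    ],
    flavor="l4x1",
)
```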
### dots.ocr Specific Options

| Option | Default | Description |
|---|---|---|
| `--mode` | `layout-all` | Processing mode: `layout-all`, `layout-only`, `ocr`, `grounding-ocr` |
| `--output-format` | `json` | Output format: `json`, `structured`, `markdown` |
| `--filter-category` | `None` | Filter by layout category (e.g., `Table`, `Formula`) |
| `--output-column` | `dots_ocr_output` | Column name for JSON output |
| `--bbox-column` | `layout_bboxes` | Column for bounding boxes (structured mode) |
| `--category-column` | `layout_categories` | Column for categories (structured mode) |
| `--text-column` | `layout_texts` | Column for texts (structured mode) |
| `--markdown-column` | `markdown` | Column for markdown output |
| `--use-transformers` | `False` | Use transformers backend instead of vLLM (more compatible) |
💡 **Performance tip**: Increase batch size for faster processing (e.g., `--batch-size 128` for A10G GPUs).

⚠️ **dots.ocr note**: If you encounter vLLM initialization errors, use `--use-transformers` for a more compatible (but slower) backend.
More OCR VLM Scripts coming soon! Stay tuned for updates!