
OCR Scripts - Development Notes

Active Scripts

DeepSeek-OCR v1 (deepseek-ocr-vllm.py)

Production Ready

  • Fully supported by vLLM
  • Fast batch processing
  • Tested and working on HF Jobs

LightOnOCR-2-1B (lighton-ocr2.py)

⚠️ Temporarily Broken (2026-01-29)

Status: vLLM nightly regression - image processor loading fails

What happened:

  • Script was working with vLLM nightly v0.15.0rc2.dev73
  • Nightly updated to v0.15.0rc2.dev81 and broke
  • Error: OSError: Can't load image processor for 'lightonai/LightOnOCR-2-1B'
  • Both nightly and stable vLLM 0.14.x have this issue now
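Until the regression is fixed, one possible workaround (untested sketch; assumes the dev73 wheels remain published on vLLM's nightly index) is pinning the last known-good build in the script's PEP 723 inline metadata:

```python
# /// script
# dependencies = [
#     "vllm==0.15.0rc2.dev73",  # last nightly that loaded LightOnOCR-2's image processor
# ]
# [[tool.uv.index]]
# name = "vllm-nightly"
# url = "https://wheels.vllm.ai/nightly"  # assumption: vLLM's nightly wheel index
# ///
```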

Initial test results (before breakage):

  • 8/10 samples had good OCR output
  • 2/10 samples showed repetition loops (the high max_tokens=6144 default let them run long)
  • Changed max_tokens default from 6144 → 4096 (per model card recommendation)
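Repetition loops like this can be flagged cheaply in post-processing. A minimal heuristic sketch (helper names, window size, and threshold are illustrative, not part of the script):

```python
def _count_overlapping(haystack: str, needle: str) -> int:
    """Count overlapping occurrences of needle in haystack."""
    count, start = 0, 0
    while (i := haystack.find(needle, start)) != -1:
        count += 1
        start = i + 1
    return count

def has_repetition_loop(text: str, window: int = 20, min_repeats: int = 3) -> bool:
    """Flag OCR output whose tail keeps repeating the same short chunk."""
    chunk = text[-window:]
    if len(chunk) < window or not chunk.strip():
        return False  # too short, or the tail is just whitespace
    tail = text[-window * (min_repeats + 2):]
    return _count_overlapping(tail, chunk) >= min_repeats
```

Flagged samples can then be retried with a lower max_tokens or a repetition penalty.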

Fixes applied:

  • max_tokens: 6144 → 4096 (model card recommends 4096 for arXiv papers)
  • Fixed pyarrow compatibility (>=17.0.0,<18.0.0)
  • Replaced deprecated huggingface-hub[hf_transfer] with hf-xet
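In PEP 723 inline-metadata terms, those pins look roughly like this (a sketch, not the script's verbatim header):

```python
# /// script
# dependencies = [
#     "vllm",
#     "pyarrow>=17.0.0,<18.0.0",  # compatibility pin from above
#     "hf-xet",                   # replaces deprecated huggingface-hub[hf_transfer]
# ]
# ///
```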

To verify when vLLM is fixed:

```shell
hf jobs uv run --flavor a100-large \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
    davanstrien/ufo-ColPali davanstrien/lighton-ocr2-test-v3 \
    --max-samples 10 --shuffle --seed 42
```

Model Info:

  • Model: lightonai/LightOnOCR-2-1B
  • Architecture: Pixtral ViT encoder + Qwen3 LLM
  • Training: RLVR (Reinforcement Learning with Verifiable Rewards)
  • Performance: 83.2% on OlmOCR-Bench, 42.8 pages/sec on H100

Pending Development

DeepSeek-OCR-2 (Visual Causal Flow Architecture)

Status: ⏳ Waiting for vLLM upstream support

Context: DeepSeek-OCR-2 is the next-generation OCR model (3B parameters), built on a Visual Causal Flow architecture for improved quality. We attempted to create a UV script (deepseek-ocr2-vllm.py) but hit a blocker.

Blocker: vLLM does not yet support DeepseekOCR2ForCausalLM architecture in the official release.

PR to Watch: 🔗 https://github.com/vllm-project/vllm/pull/33165

This PR adds DeepSeek-OCR-2 support but is currently:

  • ⚠️ Open (not merged)
  • Has unresolved review comments
  • Pre-commit checks failing
  • Issues: hardcoded parameters, device mismatch bugs, missing error handling

What's Needed:

  1. PR #33165 needs to be reviewed, fixed, and merged
  2. vLLM needs to release a version including the merge
  3. Then we can add these dependencies to our script:
    # dependencies = [
    #     "datasets>=4.0.0",
    #     "huggingface-hub",
    #     "pillow",
    #     "vllm",
    #     "tqdm",
    #     "toolz",
    #     "torch",
    #     "addict",
    #     "matplotlib",
    # ]
    

Implementation Progress:

  • ✅ Created deepseek-ocr2-vllm.py script
  • ✅ Fixed dependency issues (pyarrow, datasets>=4.0.0)
  • ✅ Tested script structure on HF Jobs
  • ❌ Blocked: vLLM doesn't recognize architecture

Partial Implementation: The file deepseek-ocr2-vllm.py exists in this repo but is not functional until vLLM support lands. Consider it a draft.

Testing Evidence: When we ran on HF Jobs, we got:

```
ValidationError: Model architectures ['DeepseekOCR2ForCausalLM'] are not supported for now.
Supported architectures: [...'DeepseekOCRForCausalLM'...]
```
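A cheap preflight guard avoids burning GPU-job time on an unsupported model. A sketch (hypothetical helper; architecture names taken from the error above):

```python
import json

def archs_supported(config_json: str, vllm_archs: set[str]) -> bool:
    """Check a model's config.json 'architectures' list against vLLM's supported set."""
    archs = json.loads(config_json).get("architectures", [])
    return any(a in vllm_archs for a in archs)

# v1 is supported today; v2 is not, until PR #33165 lands.
supported = {"DeepseekOCRForCausalLM"}
v2_config = '{"architectures": ["DeepseekOCR2ForCausalLM"]}'
print(archs_supported(v2_config, supported))  # False
```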

Next Steps (when PR merges):

  1. Update deepseek-ocr2-vllm.py dependencies to include addict and matplotlib
  2. Test on HF Jobs with small dataset (10 samples)
  3. Verify output quality
  4. Update README.md with DeepSeek-OCR-2 section
  5. Document v1 vs v2 differences

Alternative Approaches (if urgent):

  • Create transformers-based script (slower, no vLLM batching)
  • Use DeepSeek's official repo setup (complex, not UV-script compatible)

Model Information:

Resolution Modes (for v2):

```python
RESOLUTION_MODES = {
    "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
    "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
    "base": {"base_size": 1024, "image_size": 768, "crop_mode": False},  # v2 optimized
    "large": {"base_size": 1280, "image_size": 1024, "crop_mode": False},
    "gundam": {"base_size": 1024, "image_size": 768, "crop_mode": True},  # v2 optimized
}
```
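The draft script could guard mode selection with a tiny accessor so bad mode names fail early (hypothetical helper; the dict is reproduced here for self-containment):

```python
RESOLUTION_MODES = {
    "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
    "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
    "base": {"base_size": 1024, "image_size": 768, "crop_mode": False},
    "large": {"base_size": 1280, "image_size": 1024, "crop_mode": False},
    "gundam": {"base_size": 1024, "image_size": 768, "crop_mode": True},
}

def get_mode_config(name: str) -> dict:
    """Return the resolution settings for a mode, with a helpful error otherwise."""
    try:
        return RESOLUTION_MODES[name]
    except KeyError:
        raise ValueError(
            f"Unknown mode {name!r}; choose from {sorted(RESOLUTION_MODES)}"
        ) from None
```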

Other OCR Scripts

Nanonets OCR (nanonets-ocr.py, nanonets-ocr2.py)

✅ Both versions working

PaddleOCR-VL (paddleocr-vl.py)

✅ Working


Last Updated: 2026-01-29

Watch PRs: https://github.com/vllm-project/vllm/pull/33165 (DeepSeek-OCR-2 support)