DeepSeek-OCR INT4 Merged Model

This is a merged version of the DeepSeek-OCR model whose weights were quantized to INT4 for storage and then dequantized back to full precision.

Model Details

  • Base Model: deepseek-ai/DeepSeek-OCR
  • Quantization: INT4 (for storage)
  • Final Format: Full precision (FP32)
  • Model Size: ~1B parameters
  • Source: Quantized weights from SamMikaelson/deepseekocr-randomreal

Usage

from transformers import AutoModelForCausalLM, AutoProcessor
from PIL import Image

# Load model and processor
model = AutoModelForCausalLM.from_pretrained(
    "YOUR_USERNAME/OCR-int4-merged",
    trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(
    "YOUR_USERNAME/OCR-int4-merged",
    trust_remote_code=True
)

# Process image
image = Image.open("document.jpg")
inputs = processor(images=image, return_tensors="pt")

# Generate OCR output
outputs = model.generate(**inputs, max_new_tokens=512)
text = processor.decode(outputs[0], skip_special_tokens=True)
print(text)

Notes

This model was created by:

  1. Quantizing the original DeepSeek-OCR to INT4
  2. Dequantizing back to full precision
  3. Merging into a standard model format

This process allows the weights to be stored and distributed efficiently in quantized form. Note that dequantization restores the full-precision format but cannot recover information lost during INT4 quantization, so outputs may differ slightly from the original model.
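The quantize/dequantize round trip in steps 1 and 2 can be sketched as follows. This is a minimal illustration of symmetric per-tensor INT4 quantization with NumPy, not the exact procedure used to build this checkpoint (the actual quantization scheme, granularity, and tooling are assumptions here):

import numpy as np

def quantize_int4(w):
    # Symmetric per-tensor INT4: integer codes in [-8, 7],
    # stored in int8 containers for simplicity.
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    # Restore full-precision (FP32) values from the INT4 codes.
    return q.astype(np.float32) * scale

Because the scale is chosen so that the largest weight maps to code 7, no value is clipped and the per-element reconstruction error is bounded by half a quantization step (scale / 2); that rounding error is exactly what the merged model retains relative to the original.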
