Qari-OCR: A High-Accuracy Model for Arabic Optical Character Recognition
Qari is built on the powerful Qwen2-VL-2B model and fine-tuned on an Arabic OCR dataset.
| Metric | Score |
|---|---|
| Character Error Rate (CER) | 0.300 |
| Word Error Rate (WER) | 0.485 |
| BLEU Score | 0.545 |
| Training Time | 11 hours |
| CO₂ Emissions | 1.88 kg eq. |
While QARI v0.2 achieves better raw text accuracy (CER: 0.061), QARI v0.3 excels in areas that these raw-accuracy metrics do not capture.
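If you want to score the model on your own data, the metrics above can be computed with standard libraries. A minimal sketch, assuming `jiwer` for CER/WER and the Hugging Face `evaluate` package for BLEU; the exact normalization behind the reported numbers may differ:

```python
# Sketch: computing CER, WER, and BLEU for a single OCR prediction.
# Assumes `pip install jiwer evaluate`; not the paper's exact evaluation pipeline.
import jiwer
import evaluate

reference = "النص المرجعي الصحيح للصفحة"   # ground-truth transcription
hypothesis = "النص المرجعي الصحيح للصفحه"  # model output (one character differs)

cer = jiwer.cer(reference, hypothesis)  # character error rate
wer = jiwer.wer(reference, hypothesis)  # word error rate

bleu = evaluate.load("bleu")
bleu_score = bleu.compute(predictions=[hypothesis], references=[[reference]])["bleu"]

print(f"CER: {cer:.3f}  WER: {wer:.3f}  BLEU: {bleu_score:.3f}")
```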
You can load this model using the `transformers` and `qwen_vl_utils` libraries:
```bash
!pip install -U transformers qwen_vl_utils "accelerate>=0.26.0" peft
!pip install -U bitsandbytes
```
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_name = "NAMAA-Space/Qari-OCR-v0.3-VL-2B-Instruct"

# Load the fine-tuned model and its processor
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_name)

max_tokens = 2000
prompt = "Below is the image of one page of a document, as well as some raw textual content that was previously extracted for it. Just return the plain text representation of this document as if you were reading it naturally. Do not hallucinate."

src = "image.png"  # path to the page image you want to transcribe

# Build the chat-style message expected by Qwen2-VL
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": f"file://{src}"},
            {"type": "text", "text": prompt},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Generate, then strip the prompt tokens so only the transcription remains
generated_ids = model.generate(**inputs, max_new_tokens=max_tokens)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(output_text)
```
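The `bitsandbytes` install above is only needed if you want to load the model quantized to save GPU memory. A minimal 4-bit loading sketch; the quantization settings are illustrative assumptions, not taken from the model card:

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor, BitsAndBytesConfig

# Illustrative 4-bit quantization settings (assumption, not from the model card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_name = "NAMAA-Space/Qari-OCR-v0.3-VL-2B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_name)
# The rest of the inference code above works unchanged.
```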
Try the model in the provided Google Colab notebook.
BibTeX:
```bibtex
@article{wasfy2025qari,
  title={QARI-OCR: High-Fidelity Arabic Text Recognition through Multimodal Large Language Model Adaptation},
  author={Wasfy, Ahmed and Nacar, Omer and Elkhateb, Abdelakreem and Reda, Mahmoud and Elshehy, Omar and Ammar, Adel and Boulila, Wadii},
  journal={arXiv preprint arXiv:2506.02295},
  year={2025}
}
```