---
language:
  - ar
size_categories:
  - 1K<n<10K
task_categories:
  - text-generation
dataset_info:
  features:
    - name: filename
      dtype: string
    - name: ground_truth
      dtype: string
  splits:
    - name: train
      num_bytes: 926418
      num_examples: 1200
  download_size: 407863
  dataset_size: 926418
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

SadeedDiac-25: A Benchmark for Arabic Diacritization

Paper: https://arxiv.org/abs/2504.21635

SadeedDiac-25 is a comprehensive and linguistically diverse benchmark specifically designed for evaluating Arabic diacritization models. It unifies Modern Standard Arabic (MSA) and Classical Arabic (CA) in a single dataset, addressing key limitations in existing benchmarks.

Overview

Existing Arabic diacritization benchmarks tend to focus on either Classical Arabic (e.g., Fadel, Abbad) or Modern Standard Arabic (e.g., CATT, WikiNews), with limited domain diversity and quality inconsistencies. SadeedDiac-25 addresses these issues by:

  • Combining MSA and CA in one dataset
  • Covering diverse domains (e.g., news, religion, politics, sports, culinary arts)
  • Ensuring high annotation quality through a multi-stage expert review process
  • Avoiding contamination from large-scale pretraining corpora

Dataset Composition

SadeedDiac-25 consists of 1,200 paragraphs:

  • 📘 50% Modern Standard Arabic (MSA)
    • 454 paragraphs of curated original MSA content
    • 146 paragraphs from WikiNews
    • Length: 40–50 words per paragraph
  • 📗 50% Classical Arabic (CA)
    • 📖 600 paragraphs from the Fadel test set
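
For reference, here is a minimal loading sketch using the 🤗 datasets library. The repository id below is an assumption inferred from the Misraj-AI organization handle; adjust it if the actual repo id differs.

```python
from datasets import load_dataset

# Hypothetical repo id, inferred from the Misraj-AI organization handle;
# replace with the actual Hugging Face dataset id if it differs.
ds = load_dataset("Misraj-AI/SadeedDiac-25", split="train")

print(ds)                     # 1,200 rows with 'filename' and 'ground_truth' columns
print(ds[0]["ground_truth"])  # one fully diacritized Arabic paragraph
```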

Evaluation Results

We evaluated several models on SadeedDiac-25, including proprietary LLMs and open-source Arabic models. Evaluation metrics include Diacritic Error Rate (DER), Word Error Rate (WER), and hallucination rates. The evaluation code for this dataset is available at: https://github.com/misraj-ai/Sadeed
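
For intuition only, the sketch below shows one simplified way to compute DER and WER over a reference/prediction pair. It is not the official scoring script (the evaluation code linked above also handles the case-ending variants and hallucination counting) and it assumes the prediction preserves the undiacritized base text.

```python
# Simplified DER/WER sketch; see https://github.com/misraj-ai/Sadeed for the
# official evaluation code. Assumes no hallucinated or dropped base characters.
import re

# Arabic diacritic marks (fathatan through sukun), U+064B-U+0652
DIACRITICS = re.compile(r"[\u064B-\u0652]")

def split_base_and_diacritics(text: str):
    """Pair each base character with the diacritic marks that follow it."""
    pairs = []
    for ch in text:
        if DIACRITICS.match(ch) and pairs:
            base, marks = pairs[-1]
            pairs[-1] = (base, marks + ch)
        else:
            pairs.append((ch, ""))
    return pairs

def der(reference: str, prediction: str) -> float:
    """Fraction of base characters whose attached diacritics differ."""
    ref = split_base_and_diacritics(reference)
    pred = split_base_and_diacritics(prediction)
    errors = sum(r != p for r, p in zip(ref, pred))
    return errors / max(len(ref), 1)

def wer(reference: str, prediction: str) -> float:
    """Fraction of whitespace-delimited words that differ."""
    ref_words, pred_words = reference.split(), prediction.split()
    errors = sum(r != p for r, p in zip(ref_words, pred_words))
    return errors / max(len(ref_words), 1)
```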

Evaluation Table

| Model | DER (CE) | WER (CE) | DER (w/o CE) | WER (w/o CE) | Hallucinations |
|---|---|---|---|---|---|
| Claude-3-7-Sonnet-Latest | 1.3941 | 4.6718 | 0.7693 | 2.3098 | 0.821 |
| GPT-4 | 3.8645 | 5.2719 | 3.8645 | 10.9274 | 1.0242 |
| Gemini-Flash-2.0 | 3.1926 | 7.9942 | 2.3783 | 5.5044 | 1.1713 |
| Sadeed | 7.2915 | 13.7425 | 5.2625 | 9.9245 | 7.1946 |
| Aya-23-8B | 25.6274 | 47.4908 | 19.7584 | 40.2478 | 5.7793 |
| ALLaM-7B-Instruct | 50.3586 | 70.3369 | 39.4100 | 67.0920 | 36.5092 |
| Yehia-7B | 50.8801 | 70.2323 | 39.7677 | 67.1520 | 43.1113 |
| Jais-13B | 78.6820 | 99.7541 | 60.7271 | 99.5702 | 61.0803 |
| Gemma-2-9B | 78.8560 | 99.7928 | 60.9188 | 99.5895 | 86.8771 |
| SILMA-9B-Instruct-v1.0 | 78.6567 | 99.7367 | 60.7106 | 99.5586 | 93.6515 |

Note: CE = Case Ending

Citation

If you use SadeedDiac-25 in your work, please cite:

@misc{aldallal2025sadeedadvancingarabicdiacritization,
      title={Sadeed: Advancing Arabic Diacritization Through Small Language Model}, 
      author={Zeina Aldallal and Sara Chrouf and Khalil Hennara and Mohamed Motaism Hamed and Muhammad Hreden and Safwan AlModhayan},
      year={2025},
      eprint={2504.21635},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.21635}, 
}

License

📄 This dataset is released under the CC BY-NC-SA 4.0 License.

Contact

📬 For questions, contact Misraj-AI on Hugging Face.