---
language:
- ar
size_categories:
- 1K<n<10K
task_categories:
- text-generation
dataset_info:
  features:
  - name: filename
    dtype: string
  - name: ground_truth
    dtype: string
  splits:
  - name: train
    num_bytes: 926418
    num_examples: 1200
  download_size: 407863
  dataset_size: 926418
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# SadeedDiac-25: A Benchmark for Arabic Diacritization
[Paper](https://huggingface.co/papers/2504.21635)
**SadeedDiac-25** is a comprehensive and linguistically diverse benchmark specifically designed for evaluating Arabic diacritization models. It unifies Modern Standard Arabic (MSA) and Classical Arabic (CA) in a single dataset, addressing key limitations in existing benchmarks.
## Overview
Existing Arabic diacritization benchmarks tend to focus on either Classical Arabic (e.g., Fadel, Abbad) or Modern Standard Arabic (e.g., CATT, WikiNews), with limited domain diversity and quality inconsistencies. SadeedDiac-25 addresses these issues by:
- Combining MSA and CA in one dataset
- Covering diverse domains (e.g., news, religion, politics, sports, culinary arts)
- Ensuring high annotation quality through a multi-stage expert review process
- Avoiding contamination from large-scale pretraining corpora
## Dataset Composition
SadeedDiac-25 consists of 1,200 paragraphs:
- **📘 50% Modern Standard Arabic (MSA)**
- 454 paragraphs of curated original MSA content
- 146 paragraphs from WikiNews
- Length: 40–50 words per paragraph
- **📗 50% Classical Arabic (CA)**
- 📖 600 paragraphs from the Fadel test set
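The benchmark can be loaded with the 🤗 `datasets` library. The sketch below assumes the dataset is published under the `Misraj/SadeedDiac-25` repository id (an assumption; use the id shown on this Hub page) and inspects the two fields declared in the card metadata, `filename` and `ground_truth`:

```python
from datasets import load_dataset

# Repository id is assumed -- replace it with the id shown on this Hub page.
ds = load_dataset("Misraj/SadeedDiac-25", split="train")

print(ds)  # 1,200 examples with 'filename' and 'ground_truth' columns
sample = ds[0]
print(sample["filename"])      # identifier of the source paragraph
print(sample["ground_truth"])  # fully diacritized Arabic paragraph
```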
## Evaluation Results
We evaluated several models on SadeedDiac-25, including proprietary LLMs and open-source Arabic models. Evaluation metrics include Diacritic Error Rate (DER), Word Error Rate (WER), and hallucination rates.
The evaluation code for this dataset is available at: https://github.com/misraj-ai/Sadeed
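Since the dataset stores only the diacritized reference text, a model's input is usually produced by stripping the diacritic marks from `ground_truth`. The snippet below is an illustrative sketch using the standard Arabic harakat Unicode range; the official preprocessing in the repository above should be treated as authoritative:

```python
import re

# Arabic diacritic marks: fathatan through sukun (U+064B-U+0652) plus the
# superscript alef (U+0670). This list is a simplification for illustration.
DIACRITICS = re.compile(r"[\u064B-\u0652\u0670]")

def strip_diacritics(text: str) -> str:
    """Remove diacritic marks to obtain an undiacritized model input."""
    return DIACRITICS.sub("", text)

print(strip_diacritics("اللُّغَةُ العَرَبِيَّةُ"))  # -> اللغة العربية
```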
### Evaluation Table
| Model | DER (CE) | WER (CE) | DER (w/o CE) | WER (w/o CE) | Hallucinations |
| ------------------------ | ---------- | ---------- | ------------ | ------------ | -------------- |
| Claude-3-7-Sonnet-Latest | **1.3941** | **4.6718** | **0.7693** | **2.3098** | **0.821** |
| GPT-4 | 3.8645 | 5.2719 | 3.8645 | 10.9274 | 1.0242 |
| Gemini-Flash-2.0 | 3.1926 | 7.9942 | 2.3783 | 5.5044 | 1.1713 |
| *Sadeed* | *7.2915* | *13.7425* | *5.2625* | *9.9245* | *7.1946* |
| Aya-23-8B | 25.6274 | 47.4908 | 19.7584 | 40.2478 | 5.7793 |
| ALLaM-7B-Instruct | 50.3586 | 70.3369 | 39.4100 | 67.0920 | 36.5092 |
| Yehia-7B | 50.8801 | 70.2323 | 39.7677 | 67.1520 | 43.1113 |
| Jais-13B | 78.6820 | 99.7541 | 60.7271 | 99.5702 | 61.0803 |
| Gemma-2-9B | 78.8560 | 99.7928 | 60.9188 | 99.5895 | 86.8771 |
| SILMA-9B-Instruct-v1.0 | 78.6567 | 99.7367 | 60.7106 | 99.5586 | 93.6515 |
> **Note**: CE = Case Ending
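
For reference, here is a simplified sketch of how WER and DER can be computed by aligning the predicted and reference texts position by position. The exact definitions, including the with/without case-ending variants and hallucination handling, are implemented in the evaluation code linked above, so treat this only as an approximation:

```python
import re

HARAKAT = re.compile(r"[\u064B-\u0652\u0670]")

def wer(pred: str, ref: str) -> float:
    """Fraction of words whose diacritization differs from the reference.
    Assumes both strings split into the same number of words."""
    p, r = pred.split(), ref.split()
    return sum(pw != rw for pw, rw in zip(p, r)) / max(len(r), 1)

def der(pred: str, ref: str) -> float:
    """Fraction of base characters whose attached diacritics differ.
    Assumes the undiacritized skeletons of pred and ref are identical."""
    def marks(text):
        out, cur = [], ""
        for ch in text:
            if HARAKAT.match(ch):
                cur += ch       # diacritic attaches to the preceding base char
            else:
                out.append(cur) # flush marks collected so far
                cur = ""
        out.append(cur)
        return out
    pm, rm = marks(pred), marks(ref)
    return sum(a != b for a, b in zip(pm, rm)) / max(len(rm), 1)
```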
## Citation
If you use SadeedDiac-25 in your work, please cite:
```bibtex
@misc{aldallal2025sadeedadvancingarabicdiacritization,
title={Sadeed: Advancing Arabic Diacritization Through Small Language Model},
author={Zeina Aldallal and Sara Chrouf and Khalil Hennara and Mohamed Motaism Hamed and Muhammad Hreden and Safwan AlModhayan},
year={2025},
eprint={2504.21635},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.21635},
}
```
## License
📄 This dataset is released under the CC BY-NC-SA 4.0 License.
## Contact
📬 For questions, contact [Misraj-AI](https://misraj.ai/) on Hugging Face.