---
dataset_info:
- config_name: v1_2024
  features:
  - name: id
    dtype: int64
  - name: description
    dtype: string
  - name: time_limit
    dtype: int64
  - name: memory_limit
    dtype: int64
  - name: checker
    dtype: string
  - name: test_cases
    list:
    - name: input
      dtype: string
    - name: output
      dtype: string
  - name: year
    dtype: int64
  - name: date
    dtype: string
  - name: difficulty
    dtype: string
  - name: contest_category
    dtype: string
  - name: contest_name
    dtype: string
  splits:
  - name: test
    num_bytes: 20187500547
    num_examples: 400
  download_size: 12737762718
  dataset_size: 20187500547
- config_name: v1_2025
  features:
  - name: id
    dtype: int64
  - name: description
    dtype: string
  - name: time_limit
    dtype: int64
  - name: memory_limit
    dtype: int64
  - name: checker
    dtype: string
  - name: year
    dtype: int64
  - name: date
    dtype: string
  - name: difficulty
    dtype: string
  - name: contest_category
    dtype: string
  - name: contest_name
    dtype: string
  splits:
  - name: test
    num_bytes: 201028
    num_examples: 56
  download_size: 104645
  dataset_size: 201028
configs:
- config_name: v1_2024
  data_files:
  - split: test
    path: v1_2024/test-*
- config_name: v1_2025
  data_files:
  - split: test
    path: v1_2025/test-*
---

<div align="center">
<h1>AetherCode: Evaluating LLMs' Ability to Win In Premier Programming Competitions</h1>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://arxiv.org/" target="_blank" style="margin: 2px;">
    <img alt="Coming Soon" src="https://img.shields.io/badge/arXiv-Coming%20Soon-red?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/datasets/m-a-p" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-m--a--p-536af5" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/datasets/m-a-p/AetherCode/blob/main/LICENSE" style="margin: 2px;">
    <img alt="Dataset License" src="https://img.shields.io/badge/Dataset_License-CC--BY--4.0-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

## Introduction

Competitive programming has emerged as a critical benchmark for evaluating the reasoning and coding capabilities of Large Language Models (LLMs). Despite impressive progress on existing benchmarks, we argue that current evaluations overstate model proficiency, masking a substantial gap between LLMs and elite human programmers. This gap arises from two key limitations: insufficient difficulty and scope of benchmark problems, and evaluation bias from low-quality test cases. To address these shortcomings, we present AetherCode, a new benchmark that draws problems from premier programming competitions such as IOI and ICPC, offering broader coverage and higher difficulty. AetherCode further incorporates comprehensive, expert-validated test suites built through a hybrid of automated generation and human curation, ensuring rigorous and reliable assessment. By combining challenging problem design with robust evaluation, AetherCode provides a more faithful measure of LLM capabilities and sets a new standard for future research in code reasoning.

## Highlights

**Problem Curation from Top-Tier Competitions**: AetherCode is the first benchmark to systematically collect problems from premier programming competitions worldwide, including the Olympiad in Informatics (OI) and the International Collegiate Programming Contest (ICPC). Our process involved comprehensively collecting problems, meticulously cleaning them, and converting them from PDF to a Markdown+LaTeX format. Each problem statement was manually proofread for correctness, and a team of competitive programming experts annotated each problem with classification tags.

**High-Quality Test Case Generation**: We developed a hybrid methodology, combining automated generation with expert annotation, to create high-quality test cases for every problem. We evaluated the correctness and comprehensiveness of our test cases by validating them against a large corpus of collected solutions, enforcing a standard of zero false positives and zero false negatives.
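
To make the "zero false positives, zero false negatives" standard concrete, here is a minimal sketch of the shape such a validation loop could take. It assumes a hypothetical `judge` callable that compiles a submission and runs it against a problem's test cases; no such harness ships with the dataset, and the function names are illustrative only.

```python
# Minimal sketch of the zero-false-positive / zero-false-negative check.
# `judge(source, test_cases)` is a hypothetical helper that compiles a
# submission and returns True only if it passes every test case.

def validate_test_suite(problem, accepted, rejected, judge):
    """Validate a problem's test suite against a corpus of known solutions.

    accepted: solutions known to be correct -- all must pass
              (any failure is a false negative).
    rejected: solutions known to be wrong -- all must fail
              (any pass is a false positive).
    """
    for source in accepted:
        if not judge(source, problem["test_cases"]):
            return False  # false negative: a correct solution was rejected
    for source in rejected:
        if judge(source, problem["test_cases"]):
            return False  # false positive: a wrong solution was accepted
    return True
```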

## Quickstart

```python
from datasets import load_dataset

# Login using e.g. `huggingface-cli login` to access this dataset
ds = load_dataset("m-a-p/AetherCode", "v1_2024")
```
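
Each record exposes the fields declared in the schema above. The snippet below is an illustrative walk through one `v1_2024` problem; the field names follow the dataset card, but details such as the units of `time_limit` and `memory_limit` are assumptions to verify, not documented guarantees.

```python
from datasets import load_dataset

ds = load_dataset("m-a-p/AetherCode", "v1_2024", split="test")

problem = ds[0]
print(problem["contest_name"], "-", problem["difficulty"])
print(problem["description"][:200])  # Markdown+LaTeX problem statement

# `test_cases` is a list of {"input": ..., "output": ...} string pairs.
for case in problem["test_cases"][:2]:
    print("input:", case["input"][:80])
    print("expected:", case["output"][:80])
```

Note that the `v1_2025` config omits the `test_cases` field in the schema above, so this walkthrough applies to `v1_2024`.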

## License

This project is licensed under CC-BY-4.0. See the [LICENSE file](https://huggingface.co/datasets/m-a-p/AetherCode/blob/main/LICENSE) for details.