Dataset Card for GitTaskBench
Dataset Details
Dataset Description
GitTaskBench is a benchmark dataset designed to evaluate the capabilities of code-based intelligent agents in solving real-world tasks by leveraging GitHub repositories.
It contains 54 representative tasks across 7 domains, carefully curated to reflect real-world complexity and economic value. Each task is associated with a fixed GitHub repository to ensure reproducibility and fairness in evaluation.
- Curated by: QuantaAlpha Research Team
- Funded by: Not specified
- Shared by: GitTaskBench Team
- Language(s): Primarily English (task descriptions, documentation)
- License: Not specified (e.g., cc-by-nc-sa-4.0)
Dataset Sources
- Repository: GitTaskBench GitHub
- Paper: arXiv:2508.18993
- Organization: Team Homepage
Uses
Direct Use
- Evaluating LLM-based agents (e.g., RepoMaster, SWE-Agent, Aider, OpenHands).
- Benchmarking repository-level reasoning and execution.
- Training/testing frameworks for real-world software engineering tasks.
Out-of-Scope Use
- Not intended for personal data processing.
- Not designed as a dataset for training NLP models directly.
- Not suitable for commercial applications requiring private/sensitive datasets.
Dataset Structure
- Tasks: 54 total, spanning 7 domains.
- Domains include:
- Image Processing
- Video Processing
- Speech Processing
- Physiological Signals Processing
- Security and Privacy
- Web Scraping
- Office Document Processing
Each task specifies:
- Input requirements (file types, formats).
- Output expectations.
- Evaluation metrics (task-specific, e.g., accuracy thresholds, PSNR for image quality, Hasler-Bülthoff metric for video); a minimal example of such a threshold check follows.
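To make the pass/fail protocol concrete, here is a minimal sketch of how a PSNR-based check could look for an image-processing task. The `psnr` and `passes_task` helpers and the 30 dB threshold are illustrative assumptions; the actual thresholds and evaluation scripts are defined per task in the GitTaskBench repository.

```python
# Minimal sketch of a task-specific pass/fail check (illustrative only).
import numpy as np

def psnr(reference: np.ndarray, output: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference image and an agent's output."""
    mse = np.mean((reference.astype(np.float64) - output.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

def passes_task(reference: np.ndarray, output: np.ndarray, threshold_db: float = 30.0) -> bool:
    """Hypothetical pass/fail rule: the output must reach the task's PSNR threshold."""
    return psnr(reference, output) >= threshold_db
```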
Usage Example
You can easily load the dataset using the 🤗 Datasets library:
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("Nicole-Yi/GitTaskBench")

# Inspect the dataset structure
print(dataset)

# Access one task example (split name follows the example output below)
print(dataset["train"][0])
```
Example Output
```
DatasetDict({
    train: Dataset({
        features: ['task_id', 'domain', 'description', 'input_format', 'output_requirement', 'evaluation_metric'],
        num_rows: 54
    })
})
```
Each task entry contains:
- task_id: Unique task identifier (e.g., Trafilatura_01)
- domain: Task domain (e.g., Image Processing, Speech Processing, etc.)
- description: Natural language description of the task
- input_format: Expected input file type/format
- output_requirement: Required output specification
- evaluation_metric: Evaluation protocol and pass/fail criteria
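Because every entry carries a domain field, the benchmark can be sliced by domain directly with the 🤗 Datasets API. The sketch below assumes the train split and domain strings shown above; adjust them if the hosted splits differ.

```python
from datasets import load_dataset

dataset = load_dataset("Nicole-Yi/GitTaskBench")

# Keep only tasks from one domain (domain strings follow the list in Dataset Structure).
image_tasks = dataset["train"].filter(lambda task: task["domain"] == "Image Processing")

# Print a short summary of each selected task.
for task in image_tasks:
    print(task["task_id"], "-", task["description"])
```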
Dataset Creation
Curation Rationale
Current agent benchmarks often lack real-world grounding. GitTaskBench fills this gap by focusing on practical, repository-driven tasks that mirror how developers solve real problems using GitHub projects.
Source Data
Data Collection and Processing
- Selected GitHub repositories that match strict criteria (stability, completeness, reproducibility).
- Curated real-world tasks mapped to fixed repositories.
- Defined consistent evaluation protocols across tasks.
Who are the source data producers?
- Source repositories come from open-source GitHub projects.
- Benchmark curated by QuantaAlpha team (researchers from CAS, Tsinghua, PKU, CMU, HKUST, etc.).
Annotations
- Task-specific evaluation metrics are provided as annotations.
- No human-labeled data annotations beyond benchmark definitions.
Personal and Sensitive Information
- Dataset does not include personally identifiable information.
- Repositories selected exclude sensitive or private data.
Bias, Risks, and Limitations
- Bias: Repository and task selection may reflect research biases toward specific domains.
- Risk: The benchmark assumes GitHub accessibility; tasks may become less relevant if the fixed repositories change or disappear in the future.
- Limitation: Tasks are curated and fixed; not all real-world cases are covered.
Recommendations
- Use this benchmark for real-world evaluation of code agents.
- Ensure compliance with licensing before re-distribution.
Citation
If you use GitTaskBench, please cite the paper:
BibTeX:
```bibtex
@misc{ni2025gittaskbench,
  title={GitTaskBench: A Benchmark for Code Agents Solving Real-World Tasks Through Code Repository Leveraging},
  author={Ziyi Ni and Huacan Wang and Shuo Zhang and Shuo Lu and Ziyang He and Wang You and Zhenheng Tang and Yuntao Du and Bill Sun and Hongzhang Liu and Sen Hu and Ronghao Chen and Bo Li and Xin Li and Chen Hu and Binxing Jiao and Daxin Jiang and Pin Lyu},
  year={2025},
  eprint={2508.18993},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2508.18993},
}
```
More Information
- Maintainer: QuantaAlpha Research Team
- Contact: See GitTaskBench GitHub Issues
✨ Key Features:
- Multi-modal tasks (vision, speech, text, signals).
- Repository-level evaluation.
- Real-world relevance (PDF extraction, video coloring, speech analysis, etc.).
- Extensible design for new tasks (see the schema sketch below).
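As a sketch of that extensibility, a new task can be described with the same fields documented above; every value below is hypothetical and only illustrates the schema.

```python
# Hypothetical new task entry reusing the documented schema (values are illustrative).
new_task = {
    "task_id": "ExampleRepo_01",
    "domain": "Office Document Processing",
    "description": "Extract all tables from the input PDF and save them as CSV files.",
    "input_format": "PDF file",
    "output_requirement": "One CSV file per extracted table",
    "evaluation_metric": "Table-extraction accuracy above a task-specific threshold",
}
```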