Update README.md
README.md

---
license: cc-by-4.0
task_categories:
- text-generation
language:
- code
tags:
- CodeGen
- LLVM
pretty_name: IR-Level Compiler Optimization Dataset
size_categories:
- 100K<n<1M
---

# Dataset Card for IROpti

**IROpti** is a publicly available dataset designed to advance the use of large language models (LLMs) in compiler optimization. It builds on LLVM, one of the most widely adopted modern compilers, and uses LLVM's intermediate representation (IR) as its foundation. IROpti contains roughly 170,000 IR samples curated from 1,704 GitHub repositories across diverse domains, providing a comprehensive resource for training and evaluating models on IR-level compiler optimization.

------

## Dataset Details

### Description

- **Languages**: LLVM Intermediate Representation (LLVM IR)
- **Size**: ~170,000 IR samples
- **Optimization Behaviors**: >4.3 million annotations

### Source

- **Repositories**: 1,704 open-source GitHub repositories
- **Link**: [IROpti on Hugging Face](https://huggingface.co/datasets/YangziResearch/IROpti)

------

## Intended Uses

IROpti is suitable for the following use cases:

1. **IR Understanding**: Train models to extract structural and semantic information from LLVM IR code.
2. **Optimization Behavior Analysis**: Evaluate a model's ability to capture and apply real-world compiler optimizations.
3. **Optimized Code Generation**: Use LLMs to generate optimized IR from unoptimized input.
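
For use cases 1 and 3, records can be paired as supervised examples that map unoptimized IR to its O3 counterpart. The snippet below is a minimal sketch, assuming the Parquet shards load as a single `train` split via the Hugging Face `datasets` library and using the field names listed under Dataset Structure; the prompt wording is purely illustrative.

```python
# Minimal sketch: building (unoptimized IR -> O3 IR) training pairs from IROpti.
# Assumptions: the Parquet shards resolve to a "train" split, and the field
# names match those documented under Dataset Structure. The prompt template
# below is hypothetical, not something the dataset prescribes.
from datasets import load_dataset

ds = load_dataset("YangziResearch/IROpti", split="train")

def to_pair(record):
    prompt = (
        "Optimize the following LLVM IR as `opt -passes='default<O3>'` would:\n\n"
        + record["preprocessed_ir"]
    )
    return {"prompt": prompt, "target": record["o3_ir"]}

pairs = ds.map(to_pair, remove_columns=ds.column_names)
print(pairs[0]["prompt"][:400])
```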

### Out-of-Scope Uses

- **Non-Compiler Tasks**: The dataset is specifically tailored to IR-level optimization tasks and may not generalize well to unrelated domains.

------

## Dataset Structure

Each record in IROpti includes:

- `original_ir`: Unoptimized LLVM IR (via `clang/clang++ -Xclang -disable-llvm-passes`)
- `preprocessed_ir`: Cleaned version of the original IR
- `o3_ir`: Optimized IR generated via `opt -passes='default<O3>'` (see the sketch below)
- `o3_active_passes`: Passes that took effect during the O3 pipeline
- `structural_hash`: Structural hash of the IR, via `opt -passes='print<structural-hash><detailed>'`
- `repo_name`: Source GitHub repository
- `repo_file_path`: File path within the repository
- `function_name`: IR function name
- `repo_license`: License associated with the source repository

> Each `.parquet` file contains ~20,000 examples.
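
The optimized fields can in principle be regenerated from `original_ir` with the `opt` invocation quoted above. Below is a minimal sketch of that step, assuming an `opt` binary from a recent LLVM release is on `PATH`; file names and error handling are illustrative, and the regenerated text is not guaranteed to match the stored `o3_ir` byte-for-byte across LLVM versions.

```python
# Minimal sketch: re-running the O3 pipeline on `original_ir` with the same
# opt invocation quoted in the field list above. Assumes `opt` from a recent
# LLVM release is on PATH; output may differ textually across LLVM versions.
import subprocess
import tempfile
from pathlib import Path

def optimize_o3(ir_text: str) -> str:
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "input.ll"
        dst = Path(tmp) / "o3.ll"
        src.write_text(ir_text)
        subprocess.run(
            ["opt", "-S", "-passes=default<O3>", str(src), "-o", str(dst)],
            check=True,
        )
        return dst.read_text()

# Hypothetical usage with a loaded record:
# o3_text = optimize_o3(record["original_ir"])
```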

------

## Dataset Creation
### Collection and Preprocessing

- **Selection**: Repositories were manually filtered for compilation relevance and diversity across domains.
- **Preprocessing**:
  - Local variable renaming
  - Basic block renaming
  - Struct name normalization
  - Whitespace and comment removal
  - Insertion of missing basic block headers where needed

All transformations preserve the semantic and structural fidelity of the IR.
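
One way to spot-check that such rewrites leave the module structure intact is to compare structural hashes before and after, using the same printer pass mentioned under Dataset Structure. This is an illustration rather than the project's verification pipeline, and the printer's output format (and exactly what the hash covers) depends on the LLVM release.

```python
# Illustrative sketch (not the authors' tooling): fingerprint an IR file with
# the structural-hash printer quoted earlier and compare two modules. Output
# parsing is deliberately loose because the printer's format varies across
# LLVM releases.
import subprocess

def structural_hash(ll_path: str) -> str:
    result = subprocess.run(
        ["opt", "-disable-output",
         "-passes=print<structural-hash><detailed>", ll_path],
        capture_output=True, text=True, check=True,
    )
    # Printer passes typically write to stderr; fall back to stdout just in case.
    return (result.stderr or result.stdout).strip()

# Hypothetical usage: compare an original module against its preprocessed form.
# same = structural_hash("original.ll") == structural_hash("preprocessed.ll")
```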

- **Domains**:
  - High-Performance Computing (HPC)
  - Machine Learning
  - Multimedia
  - Embedded Systems
  - System Software
  - Security
  - Reusable Libraries
  - Algorithms

| Domain | Description | #Repos | #LLVM IR | #Opt. Behaviors | Avg. Effective Opt. Steps |
| --- | --- | --- | --- | --- | --- |
| High-Performance Computing (HPC) | Loop-intensive, memory-bound workloads; key targets for vectorization, parallelism, and memory-locality optimizations. | 275 | 17,145 | 399,110 | 23.28 |
| Machine Learning | Compute-bound code that benefits from parallel execution. | 95 | 9,366 | 249,467 | 26.64 |
| Multimedia | SIMD-intensive, throughput-sensitive workloads; targets loop optimizations. | 174 | 15,019 | 338,699 | 22.55 |
| Embedded Systems | Resource-constrained (size, energy, real-time); compilers optimize for minimal footprint and energy efficiency. | 108 | 5,942 | 129,449 | 21.79 |
| System Software | Includes OS kernels, runtime systems, and allocators; requires low-level control, safety, and instruction-level optimization. | 93 | 6,581 | 136,688 | 20.77 |
| Security | Demands high performance and strict constant-time behavior; requires careful register use, vectorization, and avoidance of side-channel vulnerabilities during optimization. | 94 | 9,252 | 184,329 | 19.92 |
| Reusable Libraries | Common code reused across domains; compilers apply inlining and target-specific tuning. | 106 | 7,664 | 157,162 | 20.51 |
| Algorithms | Classical algorithms (e.g., sorting); compiler optimizations target computation and memory. | 759 | 99,595 | 2,754,505 | 27.66 |
| **Total** | - | 1,704 | 170,564 | 4,349,409 | **Avg: 22.89** |

## Citation

**BibTeX**

```bibtex
@misc{iropti2025,
  title={IROpti: Enhancing LLMs to Understand and Perform IR-level Optimizations in Compilers},
  author={Zi Yang and Lei Qiu and Fang Lyu and Ming Zhong and Zhilei Chai and Haojie Zhou and Huimin Cui and Xiaobing Feng},
  year={2025},
  url={https://huggingface.co/datasets/YangziResearch/IROpti}
}
```

> We also provide an open-source toolchain for building similar datasets. If you're interested in generating your own optimization corpus, feel free to use our tools.