sahancpal committed
Commit b3c330f · verified · 1 Parent(s): 41f5b43

Update README.md

Files changed (1)
  1. README.md +142 -65
README.md CHANGED
@@ -10,10 +10,131 @@ configs:
  path: "operator_input_models_mapping.parquet"
---

- # Understanding Trace Files in BackendBench

## Format
- Trace files capture PyTorch operations and their arguments from real model executions:

```
Operator: operation_name
@@ -25,6 +146,7 @@ cnt: count, serialized_arguments
## Structure

**Operator line**: Specifies the PyTorch operation
```
Operator: aten.add.Tensor
Operator: aten.relu.default
@@ -32,6 +154,7 @@ Operator: aten.linear.default
```

**Count lines**: Show how often each argument combination was used
```
cnt: 42, ((T([10, 20], f16), T([10, 20], f16)), {})
cnt: 0, ((T([5, 5], f32), T([5, 5], f32)), {})
@@ -39,13 +162,16 @@ cnt: 0, ((T([5, 5], f32), T([5, 5], f32)), {})

## Reading Count Lines

- **Count `42`**: This argument combination appeared 42 times in traced models
- **`cnt: 0`** = Synthetic/generated arguments (not from real models)
- **`cnt: >0`** = Real usage frequency from model traces

- **Arguments**: Same format as serialized arguments - `((args), {kwargs})`

- ## Complete Example

```
Operator: aten.add.Tensor
@@ -58,67 +184,18 @@ cnt: 234, ((T([64, 256], f16),), {})
```

This shows:
- - `aten.add.Tensor` called 156 times with 1×512×768 tensors
- - Same operation called 89 times with 32×128 tensors
- - One synthetic test case (cnt: 0)
- - `aten.relu.default` called 234 times with 64×256 tensor
-
- ## Interpretation
- Trace files provide real-world operation usage patterns, showing which tensor shapes and operations are most common in actual PyTorch models. These are fairly useful for debugging.
-
- **Note: These may be deprecated in the future, but are described as they are currently included in the dataset / codebase.**
-
-
- # Understanding Serialized Arguments in BackendBench
- ## Format
- BackendBench stores function arguments as strings containing all parameters needed to reproduce PyTorch operations:
-
- ```
- ((arg1, arg2, ...), {'key1': val1, 'key2': val2})
- ```
-
- ## Tensor Representation
- Tensors use the format `T([shape], dtype)` or `T([shape], dtype, [stride])`:
-
- ```python
- T([10, 20], f32) # 10×20 float32 tensor
- T([1, 512, 768], f16) # 1×512×768 float16 tensor
- T([64], i32) # 64-element int32 vector
- ```

- **Data types**: `f16/f32/f64` (float), `bf16` (bfloat16), `i32/i64` (int), `b8` (bool)
-
- ## Complete Examples
-
- **Single tensor argument:**
- ```python
- ((T([48, 24, 28, 28], f16),), {})
- ```
- = Function called with one 48×24×28×28 float16 tensor, no keyword arguments
-
- **Multiple tensors:**
- ```python
- ((T([8, 8, 8, 8, 8], f16), T([8, 8, 8, 8, 8], f16)), {})
- ```
- = Function with two identical 5D tensors
-
- **Mixed arguments:**
- ```python
- ((T([128, 256], f16), [1024, 249, 249]), {'dtype': torch.float16, 'device': 'cuda'})
- ```
- = Function with tensor, list, and keyword arguments

- **Complex nested:**
- ```python
- (([T([5, 5], f32), T([3, 3], i64), 42],), {'weight': T([3, 3], f32)})
- ```
- = Function with list containing tensors and numbers, plus tensor keyword argument

- ## Argument Types
- - **Tensors**: `T([shape], dtype)` format
- - **Lists**: `[item1, item2, ...]` (can contain tensors)
- - **Primitives**: `42`, `'hello'`, `True`, `None`
- - **PyTorch objects**: `torch.float16`, `torch.strided`

- ## Acknowledgements
- We are extremely grateful for the folks working on [TritonBench](https://github.com/pytorch-labs/tritonbench/tree/main) for these traces and intuitive format
@@ -10,10 +10,131 @@ configs:
  path: "operator_input_models_mapping.parquet"
---

+ # TorchBench
+
+ The TorchBench suite of [BackendBench](https://github.com/meta-pytorch/BackendBench) is designed to mimic real-world use cases. It provides operators and inputs derived from 155 model traces found in [TIMM](https://huggingface.co/timm) (67), [Hugging Face Transformers](https://huggingface.co/docs/transformers/en/index) (45), and [TorchBench](https://github.com/pytorch/benchmark) (43). (These are also the traces PyTorch developers use to [validate operators](https://hud.pytorch.org/benchmark/compilers).) You can view the origin of these traces by switching the subset in the dataset viewer to `ops_traces_models`.
+
+ When running BackendBench, much of the extra information about what you are testing is abstracted away, so you can simply run `uv run python --suite torchbench ...`. Here, however, we provide the test suite as a dataset that can be explored directly. It includes details about why certain operations and arguments were included or excluded, reflecting the careful consideration behind curating the set.
+
+ You can download the dataset in either format:
+
+ - `backend_bench_problems.parquet` (default format on Hugging Face)
+
+ - `backend_bench_problems.json` (more human-readable)
+
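+ For a quick look at the Parquet file, something like the following works (a sketch; it assumes you have downloaded `backend_bench_problems.parquet` locally and have `pandas` with `pyarrow` installed):
+
+ ```python
+ import pandas as pd
+
+ # Load the problem set and peek at a few of the fields described below.
+ df = pd.read_parquet("backend_bench_problems.parquet")
+ print(df.columns.tolist())
+ print(df[["op_name", "count", "included_in_benchmark"]].head())
+ ```
+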
+ ### Fields
+
+ - **uuid** – Unique identifier for the `(op_name, args)` pair.
+
+ - **op_name** – Full name of the operator being tested.
+
+ - **args** – Serialized form of the inputs from the trace. [See details below](#serialized-arguments-in-backendbench).
+
+ - **runnable** – Whether the operator is runnable in BackendBench (some are not yet supported).
+
+ - **included_in_benchmark** – Whether this `(op_name, args)` pair is tested in the TorchBench suite.
+
+ - **why_excluded** – If not included, a list of reasons for exclusion (e.g., "BackendBench does not support correctness testing for random ops yet", "BackendBench does not support correctness testing for tensor creation and manipulation ops yet").
+
+ - **is_synthetic** – Marks synthetically generated inputs (e.g., very large tensors). These are currently excluded from the benchmark.
+
+ - **runtime_ms** – Execution time (ms) on our hardware (a single GPU of a machine with 8× H100s and an AMD EPYC 9654 96-core processor).
+
+ - **relative_runtime_to_kernel_launch** – `runtime_ms` divided by the runtime of a dummy CUDA op (`torch.empty(0, device="cuda")`), representing launch overhead.
+
+ - **is_overhead_dominated_op** – Flags operator/argument pairs running close to CUDA launch overhead as “performance canaries.” [Histogram analysis](https://github.com/meta-pytorch/BackendBench/issues/108) showed that a 1.3× threshold above CUDA overhead is a useful cutoff. These tests can be run for sanity-checking kernels with `uv run python --suite torchbench --check-overhead-dominated-ops ...` (see the sketch after this list for how the flag appears in the data).
+
+ - **count** – Number of times this operator/input pair appeared in model traces.
+
+ - **in_models** – List of models (from real-world traces) where this operator/input pair appears.
+
+ - **in_models_count** – Number of distinct models in which this operator/input pair occurs.
+
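+ As an illustration of how these fields fit together, here is a sketch (continuing from the `pandas` snippet above) of slicing the dataset along the flags described here:
+
+ ```python
+ # Rows actually exercised by the TorchBench suite.
+ bench = df[df["included_in_benchmark"]]
+ print(f"{len(bench)} of {len(df)} (op_name, args) pairs are benchmarked")
+
+ # "Performance canaries": runtime within ~1.3x of kernel-launch overhead.
+ canaries = df[df["is_overhead_dominated_op"]]
+ print(canaries[["op_name", "runtime_ms", "relative_runtime_to_kernel_launch"]].head())
+ ```
+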
+ # Serialized Arguments in BackendBench
+
+ Generally, arguments are serialized by recording tensor metadata (shape, dtype, and sometimes stride) while everything else is preserved as-is, which keeps the format fairly intuitive. For example:
+
+ `((T([8, 8, 8, 8, 8], f16), T([8, 8, 8, 8, 8], f16)), {})`
+
+ Below, we describe the format in detail.
+
+ ## Format
+
+ BackendBench stores function arguments as strings containing all parameters needed to reproduce PyTorch operations:
+
+ ```python
+ ((arg1, arg2, ...), {'key1': val1, 'key2': val2})
+ ```
+
+ For instance, a more involved case looks like this:
+
+ ```python
+ (([T([5, 5], f32), T([3, 3], i64), 42],), {'weight': T([3, 3], f32)})
+ ```
+
+ ## Tensor Representation
+
+ Tensors use the format `T([shape], dtype)` or `T([shape], dtype, [stride])`:
+
+ ```python
+ T([10, 20], f32) # 10×20 float32 tensor
+ T([1, 512, 768], f16) # 1×512×768 float16 tensor
+ T([64], i32) # 64-element int32 vector
+ ```
+
+ **Data types**: `f16/f32/f64` (float), `bf16` (bfloat16), `i32/i64` (int), `b8` (bool)
+
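+ The optional third field mirrors `torch.Tensor.stride()`. For example (an illustrative spec, not one taken from the traces), a non-contiguous transposed view could appear as:
+
+ ```python
+ T([10, 20], f32, [1, 10])  # views a contiguous [20, 10] buffer transposed
+ ```
+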
+ ## Examples
+
+ **Single tensor argument:**
+
+ ```python
+ ((T([48, 24, 28, 28], f16),), {})
+ ```
+
+ One 48×24×28×28 float16 tensor, no keyword arguments
+
+ **Multiple tensors:**
+
+ ```python
+ ((T([8, 8, 8, 8, 8], f16), T([8, 8, 8, 8, 8], f16)), {})
+ ```
+
+ Two 5D tensors of identical shape
+
+ **Mixed arguments:**
+
+ ```python
+ ((T([128, 256], f16), [1024, 249, 249]), {'dtype': torch.float16, 'device': 'cuda'})
+ ```
+
+ A tensor and a list as positional args, plus keyword arguments
+
+ **Complex nested:**
+
+ ```python
+ (([T([5, 5], f32), T([3, 3], i64), 42],), {'weight': T([3, 3], f32)})
+ ```
+
+ A list containing tensors and a number, plus a tensor keyword argument
+
+ ## Argument Types
+
+ - **Tensors**: `T([shape], dtype)`
+
+ - **Lists**: `[item1, item2, ...]` (can contain tensors)
+
+ - **Primitives**: `42`, `'hello'`, `True`, `None`
+
+ - **PyTorch objects**: `torch.float16`, `torch.strided`
+
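+ Putting the pieces together: because a serialized string is valid Python once `T` and the dtype abbreviations are bound, inputs can be rehydrated roughly as follows (a sketch, not BackendBench's actual deserializer; tensor values are random because only metadata is traced):
+
+ ```python
+ import torch
+
+ # Bind dtype abbreviations so serialized strings can be eval'd directly.
+ f16, f32, f64 = torch.float16, torch.float32, torch.float64
+ bf16, i32, i64, b8 = torch.bfloat16, torch.int32, torch.int64, torch.bool
+
+ def T(shape, dtype, stride=None):
+     # Random contents stand in for real values; only shape/dtype/stride matter.
+     t = torch.empty(shape, dtype=dtype)
+     return t.as_strided(shape, stride) if stride is not None else t
+
+ args, kwargs = eval("((T([10, 20], f16), T([10, 20], f16)), {})")
+ out = torch.ops.aten.add.Tensor(*args, **kwargs)
+ ```
+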
+ # Trace Files in BackendBench
+
+ This repository includes `.txt` trace files, which were the original output format of model traces and are used to compose the dataset. Here’s their structure:

## Format
+
+ Trace files capture PyTorch operations and arguments from real model executions:

```
Operator: operation_name

@@ -25,6 +146,7 @@ cnt: count, serialized_arguments
## Structure

**Operator line**: Specifies the PyTorch operation
+
```
Operator: aten.add.Tensor
Operator: aten.relu.default

@@ -32,6 +154,7 @@ Operator: aten.linear.default
```

**Count lines**: Show how often each argument combination was used
+
```
cnt: 42, ((T([10, 20], f16), T([10, 20], f16)), {})
cnt: 0, ((T([5, 5], f32), T([5, 5], f32)), {})

@@ -39,13 +162,16 @@ cnt: 0, ((T([5, 5], f32), T([5, 5], f32)), {})

## Reading Count Lines

+ - **Count `42`**: Argument combination appeared 42 times in traced models
+
- **`cnt: 0`** = Synthetic/generated arguments (not from real models)
+
- **`cnt: >0`** = Real usage frequency from model traces
+

+ **Arguments**: Same format as serialized arguments `((args), {kwargs})`

+ ## Example

```
Operator: aten.add.Tensor

@@ -58,67 +184,18 @@ cnt: 234, ((T([64, 256], f16),), {})
```

This shows:

+ - `aten.add.Tensor` called 156 times with 1×512×768 tensors
+
+ - Same operation called 89 times with 32×128 tensors
+
+ - One synthetic test case (`cnt: 0`)
+
+ - `aten.relu.default` called 234 times with a 64×256 tensor
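+
+ To make the structure concrete, here is a rough parsing sketch (an assumed helper, not the repo's own tooling) that reads a trace file into a dictionary keyed by operator:
+
+ ```python
+ def parse_trace(path):
+     """Map each operator to its [(count, serialized_args), ...] entries."""
+     ops, current = {}, None
+     with open(path) as f:
+         for line in f:
+             line = line.strip()
+             if line.startswith("Operator:"):
+                 current = line.split(":", 1)[1].strip()
+                 ops.setdefault(current, [])
+             elif line.startswith("cnt:") and current is not None:
+                 count, args = line[4:].split(",", 1)
+                 ops[current].append((int(count), args.strip()))
+     return ops
+ ```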
+
+ **Note: Traces may be deprecated in the future, but are described here as they are currently included in the dataset/codebase.**
+
+ # Acknowledgements
+
+ We are extremely grateful to the [TritonBench](https://github.com/pytorch-labs/tritonbench/tree/main) team for these traces and their intuitive format.