---
license: mit
task_categories:
- text-to-image
language:
- en
configs:
- config_name: default
  data_files:
  - split: number_few
    path:
    - number/images/*_0-2_*.png
  - split: number_many
    path:
    - number/images/*_11-13_*.png
    - number/images/*_14-16_*.png
  - split: position_boundary
    path:
    - position/images/*_position_boundary_*.png
  - split: position_center
    path:
    - position/images/*_position_center_*.png
  - split: shape_horizontal
    path:
    - shape/images/*_H2W1_*.png
    - shape/images/*_H3W1_*.png
  - split: shape_vertical
    path:
    - shape/images/*_H1W2_*.png
    - shape/images/*_H1W3_*.png
  - split: size_tiny
    path:
    - size/images/*size_020_*.png
  - split: size_large
    path:
    - size/images/*size_090_*.png
    - size/images/*size_110_*.png
    - size/images/*size_130_*.png
    - size/images/*size_150_*.png
pretty_name: LayoutBench
---

# LayoutBench

Release of the LayoutBench dataset from [Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image Generation (CVPR 2024 Workshop)](https://layoutbench.github.io/)

See also [LayoutBench-COCO](https://huggingface.co/datasets/j-min/layoutbench-coco) for zero-shot evaluation on OOD layouts with real objects.

[[Project Page](https://layoutbench.github.io/)]
[[Paper](https://arxiv.org/abs/2304.06671)]

Authors: 
[Jaemin Cho](https://j-min.io),
[Linjie Li](https://www.microsoft.com/en-us/research/people/linjli/),
[Zhengyuan Yang](https://zyang-ur.github.io/),
[Zhe Gan](https://zhegan27.github.io/),
[Lijuan Wang](https://www.microsoft.com/en-us/research/people/lijuanw/),
[Mohit Bansal](https://www.cs.unc.edu/~mbansal/)

## Summary

LayoutBench is a diagnostic benchmark that examines how layout-guided image generation models handle arbitrary, unseen layouts. It consists of 8K images, with 1K images per task:
- `number_few`
- `number_many`
- `position_center`
- `position_boundary`
- `size_tiny`
- `size_large`
- `shape_horizontal`
- `shape_vertical`
We assume that the layout-to-image generation models were trained on the [CLEVR](https://cs.stanford.edu/people/jcjohns/clevr/) dataset (in-distribution) and evaluate them on LayoutBench (out-of-distribution).
Below we compare CLEVR and LayoutBench examples.

![CLEVR vs LayoutBench](./assets/CLEVR_vs_LayoutBench.png)
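The split definitions in the YAML header above map filename glob patterns to splits. As a minimal sketch (pure Python, with the patterns copied verbatim from the header), the split an image file belongs to can be resolved like this:

```python
import fnmatch
from typing import Optional

# Glob patterns per split, copied from the dataset card's YAML config.
SPLIT_PATTERNS = {
    "number_few": ["number/images/*_0-2_*.png"],
    "number_many": ["number/images/*_11-13_*.png", "number/images/*_14-16_*.png"],
    "position_boundary": ["position/images/*_position_boundary_*.png"],
    "position_center": ["position/images/*_position_center_*.png"],
    "shape_horizontal": ["shape/images/*_H2W1_*.png", "shape/images/*_H3W1_*.png"],
    "shape_vertical": ["shape/images/*_H1W2_*.png", "shape/images/*_H1W3_*.png"],
    "size_tiny": ["size/images/*size_020_*.png"],
    "size_large": ["size/images/*size_090_*.png", "size/images/*size_110_*.png",
                   "size/images/*size_130_*.png", "size/images/*size_150_*.png"],
}

def split_of(path: str) -> Optional[str]:
    """Return the main split a relative image path belongs to, or None
    if the file is only part of a fine-grained sub-split."""
    for split, patterns in SPLIT_PATTERNS.items():
        if any(fnmatch.fnmatch(path, p) for p in patterns):
            return split
    return None
```

Note that files from the fine-grained sub-splits (e.g. `number_3-5`) are not covered by any main-split pattern, so `split_of` returns `None` for them.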

## How was it created?

To disentangle spatial control from other aspects of image generation, such as generating diverse objects, LayoutBench keeps the object configurations of [CLEVR](https://cs.stanford.edu/people/jcjohns/clevr/), whose objects have 3 shapes, 2 materials, and 8 colors (48 combinations in total), and changes only the spatial layouts.
Images in LayoutBench are collected in two steps:
- (1) sample scenes for each skill, where a scene is defined by the objects and their positions
- (2) render images from the scenes with the [Blender](https://www.blender.org/) simulator (2.93.13) and obtain bounding box layouts.


## Skill Details

We measure 4 spatial control skills (number, position, size, shape), where each skill consists of 2 OOD layout splits, giving 8 tasks in total (4 skills x 2 splits).
In total, we collect 8K images for LayoutBench evaluation, with 1K images per task.

### Skill 1: Number.

This skill involves generating images with a specified number of objects. In contrast to the ID CLEVR images with 3∼10 objects, we evaluate models on two OOD splits:
- (1) few: images with 0∼2 objects
- (2) many: images with 11∼16 objects.

### Skill 2: Position.

This skill involves generating images with objects placed at specific positions. Unlike ID CLEVR images, which feature evenly distributed object positions with little occlusion between objects, we design two OOD splits:
- (1) center: objects are placed at the center, thus leading to more occlusions
- (2) boundary: objects are only placed on boundaries (top/bottom/left/right).

### Skill 3: Size.

This skill involves generating images with objects of a specified size. We construct two OOD splits:
- (1) tiny: objects with scale 2
- (2) large: objects with scale {9, 11, 13, 15}.

In comparison, the objects in CLEVR images have only two scales {3.5, 7}. We use 3∼5 objects for this skill, since placing more large objects than this often occludes them.

### Skill 4: Shape.

This skill involves generating images with objects of a specified aspect ratio. As the objects in CLEVR images mostly have square aspect ratios, we evaluate models with two OOD splits:
- (1) horizontal: one of an object's horizontal (x/y) axes is 2 or 3 times longer than the other, leading to object bounding boxes with an aspect ratio (width:height) of 2:1 or 3:1
- (2) vertical: an object's vertical (z) axis is 2 or 3 times longer than its horizontal (x/y) axes, resulting in object bounding boxes with an aspect ratio of 1:2 or 1:3.

We use 3∼5 objects for this skill, since placing more objects than this often occludes them.

# Use of LayoutBench

## 1) Train your model on CLEVR dataset

## 2) Evaluate your model on LayoutBench main splits (4 skills x 2 splits = 8 tasks)

![Eval overview](./assets/task_overview.png)

We test the OOD layout skills of layout-guided image generation models trained on the CLEVR (ID) dataset. First, we generate images from LayoutBench (OOD) layouts. Then, we detect objects in the generated images with an object detector and compute the layout accuracy as average precision (AP). Please see [https://github.com/j-min/LayoutBench](https://github.com/j-min/LayoutBench) for evaluation guidelines with a pretrained DETR.
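At the core of this metric is IoU-based matching between detected boxes and the input layout boxes. The official pipeline uses a pretrained DETR and COCO-style AP; the helper below is only an illustrative sketch of the matching step, assuming boxes in COCO `[x, y, w, h]` format:

```python
def box_iou(a, b):
    """IoU of two boxes in [x, y, w, h] (COCO) format."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def matches_layout(pred, gt, thresh=0.5):
    """Greedy one-to-one matching of predicted boxes to layout boxes.

    Returns the number of predictions matched at the given IoU threshold.
    """
    unmatched = list(gt)
    hits = 0
    for p in pred:
        best = max(unmatched, key=lambda g: box_iou(p, g), default=None)
        if best is not None and box_iou(p, best) >= thresh:
            unmatched.remove(best)
            hits += 1
    return hits
```

The full AP computation additionally accounts for detection confidences and category labels; for that, use the COCO-style evaluation in the official repo.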

## 3) (optional) Fine-grained evaluation

As described in Sec 5.3 in the paper, we also provide fine-grained evaluation splits for each skill. Specifically, we divide the 4 skills into more fine-grained splits to cover both in-distribution (ID; CLEVR configurations) and out-of-distribution (OOD; LayoutBench configurations) examples. We sample 200 images for each split and report layout accuracy.

# Dataset File Structure

For each skill, we provide the following files:

- `scene files`: created for image rendering with Blender simulator. Each scene file includes the object configurations and their positions.
- `images`: rendered images from the scenes.
- `scene files in COCO format`: scene files converted into [COCO format](https://cocodataset.org/#format-data) for evaluation.

The dataset file structure is as follows:

```bash
number/
    # layout metadata for main splits (1K each)
    scenes_number_few.json
    scenes_number_many.json

    # (optional - for fine-grained evaluation - see Sec 5.3 in the paper for more details)
    # 200 scenes for each sub-split
    # (0-2 / 11-13 / 14-16 are parts of few/many; 3-5 / 6-8 / 9-10 were additionally
    # generated since the CLEVR dataset has 3-10 objects, so there are
    # 2 splits x 1000 images + 200 x 3 extra sub-splits = 2600 images in total)
    scenes_number_0-2_200.json
    scenes_number_3-5_200.json
    ...
    scenes_number_14-16_200.json

    scenes.json # the file that includes the whole scenes

    # actual images
    images/ 
        LayoutBench_val_number_0-2_000000.png
        ...
        LayoutBench_val_number_14-16_002599.png

    # scene files converted into COCO format for evaluation
    coco/
        # for main splits
        scenes_number_few_coco.json
        scenes_number_many_coco.json

        # for fine-grained analysis
        scenes_number_0-2_200_coco.json
        scenes_number_14-16_200_coco.json

# same structure for other skills

position/

shape/

size/
```
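The files under `coco/` follow the standard COCO layout: a top-level dict with `images`, `annotations` (each carrying `image_id`, `bbox` in `[x, y, w, h]`, and `category_id`), and `categories`. A sketch of grouping boxes per image from such a file (field names follow the COCO spec; LayoutBench's files may include additional keys):

```python
import json
from collections import defaultdict

def boxes_per_image(coco_json_path):
    """Map each image file name to its list of [x, y, w, h] boxes."""
    with open(coco_json_path) as f:
        coco = json.load(f)
    # Resolve annotation image_ids to file names via the images table.
    file_names = {img["id"]: img["file_name"] for img in coco["images"]}
    boxes = defaultdict(list)
    for ann in coco["annotations"]:
        boxes[file_names[ann["image_id"]]].append(ann["bbox"])
    return dict(boxes)
```

For example, `boxes_per_image("number/coco/scenes_number_few_coco.json")` would return the layout boxes for each image in the `number_few` split.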



## Citation

```bibtex
@inproceedings{Cho2024LayoutBench,
  author    = {Jaemin Cho and Linjie Li and Zhengyuan Yang and Zhe Gan and Lijuan Wang and Mohit Bansal},
  title     = {Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image Generation},
  booktitle = {The First Workshop on the Evaluation of Generative Foundation Models},
  year      = {2024},
}
```