---
license: cc-by-nc-4.0
language:
- en
tags:
- shadow
- controllable
- synthetic
pretty_name: Controllable shadow generation benchmark
size_categories:
- 1K<n<10K
---
# Overview
This is the public synthetic test set for controllable shadow generation created by the Jasper Research Team. The project page for the research that introduced this dataset is available at [this link](https://gojasper.github.io/controllable-shadow-generation-project/).
We created this dataset using [Blender](https://www.blender.org/). It contains three tracks: softness control, horizontal direction control, and vertical direction control.
Example renders from the dataset are shown below:
## Softness control:

## Horizontal direction control:

## Vertical direction control:

# Usage
The dataset is formatted to be used with [WebDataset](https://huggingface.co/docs/hub/datasets-webdataset).
```python
import matplotlib.pyplot as plt
import webdataset as wds

# Create a data iterator; the `resolve` URL serves the raw tar file,
# and `.decode("pil")` turns the PNG entries into PIL images
url = "pipe:curl -s -L https://huggingface.co/datasets/jasperai/controllable-shadow-generation-benchmark/resolve/main/controllable-shadow-generation-benchmark.tar"
data_iter = iter(wds.WebDataset(url).decode("pil"))

# Sample from the dataset
data = next(data_iter)

# Visualize the image, object mask, and object shadow
_, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(data['image.png'])
axs[0].set_title('Image')
axs[1].imshow(data['mask.png'])
axs[1].set_title('Mask')
axs[2].imshow(data['shadow.png'])
axs[2].set_title('Shadow')
plt.show()

# Print the metadata
print(data['metadata.json'])
```
Example output:

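Each sample also ships a `metadata.json` entry describing how the render was produced (an example is shown below). As a minimal sketch, assuming only the `track` field from that metadata, the same tar can be streamed once to count how many samples fall in each track:
```python
import json
from collections import Counter

import webdataset as wds

url = "pipe:curl -s -L https://huggingface.co/datasets/jasperai/controllable-shadow-generation-benchmark/resolve/main/controllable-shadow-generation-benchmark.tar"

# Count samples per track by reading only the metadata of each sample
track_counts = Counter()
for sample in wds.WebDataset(url):
    metadata = json.loads(sample['metadata.json'])  # raw bytes -> dict
    track_counts[metadata['track']] += 1
print(track_counts)
```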
Example metadata:
```python
{
    'track': 'softness_control',  # Which track the image belongs to
    'light_energy': 1000,         # Energy of the area light
    'size': 2,                    # Size of the area light
    'theta': 30.0,                # Polar angle of the area light
    'phi': 0.0,                   # Azimuthal angle of the area light
    'r': 8.0,                     # Radius of the sphere the light is placed on
    'light_location': '4.0,0.0,6.928203105926514',  # Cartesian coordinates of the area light
    'samples': 512,               # Number of samples used by the Cycles rendering engine
                                  # (we render the dataset with Cycles in Blender)
    'resolution_x': 1024,         # Width of the image
    'resolution_y': 1024          # Height of the image
}
```
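The `light_location` field appears to be the Cartesian position of the area light derived from `r`, `theta`, and `phi`. A minimal sketch of that conversion, assuming the standard spherical convention (polar angle measured from the z-axis, angles in degrees), which reproduces the example values above:
```python
import math

# Spherical -> Cartesian conversion of the light position (assumed convention,
# inferred from the example metadata above rather than stated explicitly)
theta = math.radians(30.0)  # 'theta': polar angle in degrees
phi = math.radians(0.0)     # 'phi': azimuthal angle in degrees
r = 8.0                     # 'r': radial distance

x = r * math.sin(theta) * math.cos(phi)
y = r * math.sin(theta) * math.sin(phi)
z = r * math.cos(theta)

print(x, y, z)  # ~ (4.0, 0.0, 6.9282), matching 'light_location' up to float32 rounding
```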
# BibTeX
If you use this dataset, please consider citing our paper:
```
@misc{tasar2024controllable,
  title={Controllable Shadow Generation with Single-Step Diffusion Models from Synthetic Data},
  author={Tasar, Onur and Chadebec, Clement and Aubin, Benjamin},
  year={2024},
  eprint={2412.11972},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```