---
pretty_name: AIGI Inpainting Robustness
dataset_name: aigi-inpainting-robustness
tags:
- ai-generated-images
- inpainting
- robustness
- diffusion
- watermarking
license: cc-by-nc-4.0
task_categories:
- image-classification
- other
size_categories:
- 1K<n<10K
homepage: https://huggingface.co/datasets/eoguzakin/Robustness of AI-Generated Image Detection Against Localized Inpainting Attacks
---
# Robustness of AI-Generated Image Detection Against Localized Inpainting Attacks
This repository hosts the **detector-ready datasets** and **mask packs** used in the thesis:
> **Robustness of AI-Generated Image Detection Against Localized Inpainting Attacks**
> *Oguz Akin, Saarland University, CISPA Helmholtz Center for Information Security (2025)*
It provides standardized evaluation splits for six state-of-the-art AI-generated image (AIGI) detectors across **watermarking, passive, and training-free paradigms**, tested under **LaMa** and **ZITS** inpainting attacks.
Everything is packaged as `.tar.xz` archives to ensure reproducibility and easy transfer.
---
## 📂 Repository Structure

```
.
├─ detectors/
│  ├─ ufd_datasets.tar.xz
│  ├─ dimd_datasets.tar.xz
│  ├─ dire_datasets.tar.xz
│  ├─ aeroblade_datasets.tar.xz
│  ├─ stablesig_datasets.tar.xz
│  └─ treering_datasets.tar.xz
├─ masks/
│  ├─ masks_stablesig.tar.xz
│  └─ masks_treering_wm.tar.xz
└─ checksums.sha256
```
- **detectors/** — per-detector dataset “views,” already resized/re-encoded into the formats expected by each model.
- **masks/** — random-rectangle and random-blob object masks (area-binned), used to generate inpainting attacks.
- **checksums.sha256** — SHA-256 integrity hashes for all archives.
---
## 🔎 Dataset Details
### Detector Views
Each archive expands into the exact layout expected by that detector. All splits contain **200 images per subset** (balanced).
Typical layout:

```
baseline/
  reals/
  fakes/
  [fakes_inpainted_lama/, fakes_inpainted_zits/]
robustness/
  inpainted_lama/
    randrect/
    randblob_bins/bin{1..4}/
  inpainted_zits/
    randrect/
    randblob_bins/bin{1..4}/
  [reals_inpainted/]
```
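Once a view is extracted, the layout above can be sanity-checked with a short script (a sketch; the function name and the extraction path are mine, and the expected count per subset is 200 as stated above):

```python
from pathlib import Path

def count_images(root):
    """Count image files per leaf directory of an extracted detector view."""
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    counts = {}
    for p in Path(root).rglob("*"):
        if p.is_file() and p.suffix.lower() in exts:
            key = str(p.parent.relative_to(root))
            counts[key] = counts.get(key, 0) + 1
    return counts

# Each subset should report 200 images, e.g.:
# count_images("/tmp/ufd")  ->  {"baseline/reals": 200, "baseline/fakes": 200, ...}
```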
---
### Detector Input Handling
**On disk:** each detector's view is stored already preprocessed, matching that detector's original paper/training setup.
- **UFD → 224**
(Resized + center-cropped to 224×224, CLIP normalization.)
- **DIMD → JPEG-256**
(Resized to 256×256, with JPEG round-trip to mimic training distribution.)
- **DIRE → 256**
(Resized to 256×256, matching the ADM ImageNet-256 diffusion prior.)
- **AEROBLADE / StableSig / Tree-Ring → 512**
(All evaluated directly at 512×512 without JPEG compression.)
> **Why this split?** To eliminate the effect of compression or size on classification, ensuring scientifically fair evaluation.
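For illustration, the resize-then-center-crop geometry used by the fixed-size views can be computed as follows (a minimal sketch of standard CLIP-style preprocessing; the function name is mine, not part of any detector's codebase):

```python
def resize_then_center_crop(width, height, target):
    """Scale the shorter side to `target`, then take a centered target x target crop.
    Returns the resize dimensions and the crop box (left, top, right, bottom)."""
    if width <= height:
        new_w, new_h = target, round(height * target / width)
    else:
        new_w, new_h = round(width * target / height), target
    left = (new_w - target) // 2
    top = (new_h - target) // 2
    return (new_w, new_h), (left, top, left + target, top + target)

# e.g. a 640x480 image prepared for the UFD view (224):
# resize_then_center_crop(640, 480, 224) -> ((299, 224), (37, 0, 261, 224))
```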
---
### Mask Packs
- **masks_stablesig.tar.xz**
- **masks_treering_wm.tar.xz**
Contain **random rectangle** and **random blob masks**, binned by area ratio:
- `bin1_0-3` → 0–3% of image area
- `bin2_3-10` → 3–10%
- `bin3_10-25` → 10–25%
- `bin4_25-40` → 25–40%
Used with **LaMa** and **ZITS** to create controlled inpainting attacks.
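The binning scheme above can be expressed as a small lookup (a sketch; the bin names mirror the directory names, while the boundary handling is my assumption, not taken from the thesis):

```python
AREA_BINS = [
    ("bin1_0-3",   0.00, 0.03),
    ("bin2_3-10",  0.03, 0.10),
    ("bin3_10-25", 0.10, 0.25),
    ("bin4_25-40", 0.25, 0.40),
]

def area_bin(mask_pixels, image_pixels):
    """Map a mask's area ratio to its bin; returns None outside 0-40%."""
    ratio = mask_pixels / image_pixels
    for name, lo, hi in AREA_BINS:
        if lo <= ratio < hi:
            return name
    # assumed: the top boundary (exactly 40%) belongs to the last bin
    return "bin4_25-40" if ratio == 0.40 else None
```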
---
## 📐 Metrics
Datasets are organized to support a **fixed-threshold robustness evaluation**.
- **Baseline AUC**
Distinguish clean **reals vs fakes**. Threshold `t*` chosen via **Youden’s J**.
- **Robustness AUC**
Distinguish **clean vs inpainted**.
- **ΔAUC = Baseline – Robustness**
- **ASR_inpainted** (primary):
% of **inpainted reals** classified as Real at baseline `t*`.
- **ASR_fake→real** (secondary):
% of **baseline-detected fakes** that flip to Real after inpainting.
**Watermarking detectors:**
- Thresholds fixed at **t90** and **t99** on clean watermarked images, plus a baseline-calibrated threshold chosen to reflect a realistic deployment setting.
- **ASR** = % attacked watermarked images where watermark is not detected.
- **AUC(clean vs attacked)** sanity check.
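As a concrete reading of the protocol, the baseline threshold `t*` and the secondary ASR can be computed like this (a minimal sketch; it assumes a higher score means "more likely fake", which may be inverted for some detectors):

```python
def youden_threshold(real_scores, fake_scores):
    """Choose t* maximizing Youden's J = TPR - FPR over observed scores."""
    best_t, best_j = None, float("-inf")
    for t in sorted(set(real_scores) | set(fake_scores)):
        tpr = sum(s >= t for s in fake_scores) / len(fake_scores)
        fpr = sum(s >= t for s in real_scores) / len(real_scores)
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

def asr_fake_to_real(baseline_fakes, attacked_fakes, t):
    """Among fakes detected at baseline (score >= t), the fraction whose
    inpainted counterpart falls below t, i.e. flips to Real."""
    detected = [(b, a) for b, a in zip(baseline_fakes, attacked_fakes) if b >= t]
    return sum(a < t for _, a in detected) / len(detected) if detected else 0.0
```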
---
## 📦 Archive Sizes
- `detectors/aeroblade_datasets.tar.xz` — **1.5 GB**
- `detectors/dimd_datasets.tar.xz` — **117 MB**
- `detectors/dire_datasets.tar.xz` — **468 MB**
- `detectors/stablesig_datasets.tar.xz` — **924 MB**
- `detectors/treering_datasets.tar.xz` — **1.6 GB**
- `detectors/ufd_datasets.tar.xz` — **442 MB**
- `masks/masks_stablesig.tar.xz` — **2.2 MB**
- `masks/masks_treering_wm.tar.xz` — **1.2 MB**
---
## ⚙️ Usage
### Download & Extract
```python
from huggingface_hub import hf_hub_download
import tarfile, os

REPO = "eoguzakin/Robustness of AI-Generated Image Detection Against Localized Inpainting Attacks"

def fetch_and_extract(filename, target_dir):
    path = hf_hub_download(repo_id=REPO, filename=filename, repo_type="dataset")
    os.makedirs(target_dir, exist_ok=True)
    with tarfile.open(path, "r:xz") as tar:
        tar.extractall(target_dir)
    print("Extracted:", target_dir)

# Example: UFD view + StableSig masks
fetch_and_extract("detectors/ufd_datasets.tar.xz", "/tmp/ufd")
fetch_and_extract("masks/masks_stablesig.tar.xz", "/tmp/masks_stablesig")
```
### Integrity Check
```bash
sha256sum -c checksums.sha256
```
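If `sha256sum` is unavailable, the manifest can be checked in Python instead (a sketch assuming the standard `<hash>  <path>` manifest format produced by `sha256sum`):

```python
import hashlib
from pathlib import Path

def verify_checksums(manifest, root="."):
    """Return the list of files whose SHA-256 does not match the manifest."""
    mismatched = []
    for line in Path(manifest).read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        name = name.lstrip("*").strip()  # sha256sum prefixes binary-mode names with '*'
        digest = hashlib.sha256((Path(root) / name).read_bytes()).hexdigest()
        if digest != expected:
            mismatched.append(name)
    return mismatched
```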
## 🧪 Provenance
- **Reals:** SEMI-TRUTHS (Pal et al. 2024), OpenImages subset.
- **Fakes:** GenImage diverse generator set.
- **Inpainting attacks:** LaMa (Suvorov et al. 2022), ZITS (Dong et al. 2022).
- **Watermarks:** Stable Signature (Fernandez et al. 2023), Tree-Ring (Wen et al. 2023).

Detector-specific preprocessing follows each model's original setup (see *Detector Input Handling*), ensuring comparability.
---
## 📚 Citations
If you use this dataset, please cite:
- Pal et al., 2024 — Semi-Truths: A Large-Scale Dataset of AI-Augmented Images for Evaluating Robustness of AI-Generated Image Detectors.
- Ojha et al., 2023 — Universal Fake Image Detectors.
- Corvi et al., 2023 — On the Detection of Synthetic Images Generated by Diffusion Models.
- Wang et al., 2023 — DIRE for Diffusion-Generated Image Detection.
- Ricker et al., 2024 — AEROBLADE.
- Fernandez et al., 2023 — Stable Signature.
- Wen et al., 2023 — Tree-Ring Watermarks.
- Suvorov et al., 2022 — LaMa Inpainting.
- Rombach et al., 2022 — Latent Diffusion Models.
- Dong et al., 2022 — ZITS Inpainting.
## 📝 License
Derived datasets for research use only.
Upstream datasets (SEMI-TRUTHS, GenImage, LaMa, ZITS, etc.) retain their original licenses.
This packaging (scripts + archive structure) is released under CC BY-NC 4.0 unless otherwise specified.
## 👤 Maintainer
Oguz Akin — Saarland University
Contact: [email protected]
## 🗓️ Changelog
- **v1.0** — Initial release with detector views + masks for LaMa and ZITS inpainting.