Robustness of AI-Generated Image Detection Against Localized Inpainting Attacks
This repository hosts the detector-ready datasets and mask packs used in the thesis:
Robustness of AI-Generated Image Detection Against Localized Inpainting Attacks
Oguz Akin, Saarland University, CISPA Helmholtz Center for Information Security (2025)
It provides standardized evaluation splits for six state-of-the-art AI-generated image (AIGI) detectors across watermarking, passive, and training-free paradigms, tested under LaMa and ZITS inpainting attacks.
Everything is packaged as .tar.xz archives to ensure reproducibility and easy transfer.
📂 Repository Structure
```
.
├─ detectors/
│  ├─ ufd_datasets.tar.xz
│  ├─ dimd_datasets.tar.xz
│  ├─ dire_datasets.tar.xz
│  ├─ aeroblade_datasets.tar.xz
│  ├─ stablesig_datasets.tar.xz
│  └─ treering_datasets.tar.xz
├─ masks/
│  ├─ masks_stablesig.tar.xz
│  └─ masks_treering_wm.tar.xz
└─ checksums.sha256
```
- detectors/ — per-detector dataset “views,” already resized/re-encoded into the formats expected by each model.
- masks/ — random-rectangle and random-blob object masks (area-binned), used to generate inpainting attacks.
- checksums.sha256 — SHA-256 integrity hashes for all archives.
🔎 Dataset Details
Detector Views
Each archive expands into the exact layout expected by that detector. Each split contains 200 images (e.g. the LaMa-inpainted rand-blob split built from SEMI-TRUTHS real images).
Typical layout:
```
baseline/
  reals/
  fakes/
  [fakes_inpainted_lama/, fakes_inpainted_zits/]
robustness/
  inpainted_lama/
    randrect/
    randblob_bins/bin{1..4}/
  inpainted_zits/
    randrect/
    randblob_bins/bin{1..4}/
  [reals_inpainted/]
```
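After extraction, a quick sanity check on the per-split image counts can be done with a short walk over the directory tree. This is only a minimal sketch, assuming the usual .png/.jpg extensions and the /tmp/ufd target path used in the Usage section below:

```python
import os
from collections import Counter

def count_images(root):
    """Count image files per directory under an extracted detector view."""
    counts = Counter()
    for dirpath, _, filenames in os.walk(root):
        n = sum(f.lower().endswith((".png", ".jpg", ".jpeg")) for f in filenames)
        if n:
            counts[os.path.relpath(dirpath, root)] = n
    return counts

# Example, assuming the UFD view was extracted to /tmp/ufd (see Usage below):
for split, n in sorted(count_images("/tmp/ufd").items()):
    print(f"{split}: {n} images")
```

Each leaf split directory should report 200 images.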
Detector Input Handling
On disk, each dataset is stored already preprocessed for its detector, matching that detector's original paper/training setup:
- UFD → 224 (resized + center-cropped to 224×224, CLIP normalization)
- DIMD → JPEG-256 (resized to 256×256, with a JPEG round-trip to mimic the training distribution)
- DIRE → 256 (resized to 256×256, matching the ADM ImageNet-256 diffusion prior)
- AEROBLADE / StableSig / Tree-Ring → 512 (evaluated directly at 512×512, no JPEG compression)

Why this split? It removes compression and resolution as confounding factors, so the comparison across detectors stays scientifically fair.
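For orientation, the sketch below mirrors two of the preprocessing styles described above (UFD-style CLIP input and the DIMD JPEG round-trip) using torchvision and Pillow. It is illustrative only: the archives already contain the preprocessed images, the CLIP mean/std are the standard CLIP normalization constants, and the JPEG quality factor here is an assumption rather than the value used to build the dataset.

```python
from io import BytesIO
from PIL import Image
from torchvision import transforms

# Standard CLIP normalization constants (UFD builds on CLIP features)
CLIP_MEAN = (0.48145466, 0.4578275, 0.40821073)
CLIP_STD = (0.26862954, 0.26130258, 0.27577711)

# UFD-style input: resize shorter side to 224, center-crop, CLIP-normalize
ufd_preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(CLIP_MEAN, CLIP_STD),
])

def dimd_jpeg_roundtrip(img: Image.Image, quality: int = 95) -> Image.Image:
    """Resize to 256x256 and pass the image through an in-memory JPEG
    round-trip. The quality factor is illustrative (an assumption), since
    the archives already ship the preprocessed images."""
    img = img.convert("RGB").resize((256, 256))
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()  # .copy() detaches the image from the buffer
```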
Mask Packs
- masks_stablesig.tar.xz
- masks_treering_wm.tar.xz
Both archives contain random-rectangle and random-blob masks, binned by area ratio:
- `bin1_0-3` → 0–3% of image area
- `bin2_3-10` → 3–10%
- `bin3_10-25` → 10–25%
- `bin4_25-40` → 25–40%
Used with LaMa and ZITS to create controlled inpainting attacks.
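A small helper like the following can recover which area bin a given mask falls into; it assumes masks are stored as binary images with white marking the region to inpaint, which is a common convention rather than something guaranteed by the archives:

```python
import numpy as np
from PIL import Image

# Area-ratio bin edges used for the mask packs (fractions of image area)
BIN_EDGES = [(0.00, 0.03, "bin1_0-3"),
             (0.03, 0.10, "bin2_3-10"),
             (0.10, 0.25, "bin3_10-25"),
             (0.25, 0.40, "bin4_25-40")]

def mask_bin(mask_path):
    """Return the area-ratio bin name and the ratio of a binary mask."""
    mask = np.array(Image.open(mask_path).convert("L")) > 127  # white = masked
    ratio = float(mask.mean())
    for lo, hi, name in BIN_EDGES:
        if lo <= ratio < hi:
            return name, ratio
    return None, ratio  # outside the 0-40% range covered by the packs
```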
📐 Metrics
Datasets are organized to support a fixed-threshold robustness evaluation.
- Baseline AUC: distinguishes clean reals vs. fakes. Threshold t* is chosen via Youden's J.
- Robustness AUC: distinguishes clean vs. inpainted.
- ΔAUC = Baseline AUC − Robustness AUC.
- ASR_inpainted (primary): % of inpainted reals classified as Real at the baseline t*.
- ASR_fake→real (secondary): % of baseline-detected fakes that flip to Real after inpainting.
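A minimal sketch of how these quantities can be computed with scikit-learn, assuming detector scores where higher means "fake" (the score orientation is an assumption, not part of this card):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def youden_threshold(y_true, scores):
    """Pick t* maximizing Youden's J = TPR - FPR on the baseline split."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    return thresholds[np.argmax(tpr - fpr)]

def asr_inpainted(inpainted_real_scores, t_star):
    """Fraction of inpainted reals still scoring below t* (i.e. called Real)."""
    return float(np.mean(np.asarray(inpainted_real_scores) < t_star))

# y_baseline: 1 = fake, 0 = real; s_baseline: detector scores (higher = fake)
# baseline_auc = roc_auc_score(y_baseline, s_baseline)
# t_star = youden_threshold(y_baseline, s_baseline)
# delta_auc = baseline_auc - roc_auc_score(y_robustness, s_robustness)
# asr = asr_inpainted(s_inpainted_reals, t_star)
```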
Watermarking detectors:
- Thresholds are fixed at t90 and t99 on clean watermarked images, plus a third threshold determined at baseline to reflect a real-life setting.
- ASR = % of attacked watermarked images whose watermark is no longer detected.
- AUC (clean vs. attacked) serves as a sanity check.
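One plausible reading of the fixed thresholds, sketched below, is that tXX keeps XX% of clean watermarked images detected; the exact calibration used in the thesis may differ, so treat this purely as an illustration (higher score = watermark present is also an assumption):

```python
import numpy as np

def fixed_detection_threshold(clean_wm_scores, keep_rate):
    """Threshold that keeps `keep_rate` (e.g. 0.90 for t90, 0.99 for t99) of
    clean watermarked images detected, assuming higher = watermark present."""
    return float(np.quantile(np.asarray(clean_wm_scores), 1.0 - keep_rate))

def watermark_asr(attacked_scores, threshold):
    """Share of attacked watermarked images whose watermark is missed."""
    return float(np.mean(np.asarray(attacked_scores) < threshold))

# t90 = fixed_detection_threshold(clean_scores, 0.90)
# t99 = fixed_detection_threshold(clean_scores, 0.99)
# asr_at_t90 = watermark_asr(attacked_scores, t90)
```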
📦 Archive Sizes
- detectors/aeroblade_datasets.tar.xz — 1.5 GB
- detectors/dimd_datasets.tar.xz — 117 MB
- detectors/dire_datasets.tar.xz — 468 MB
- detectors/stablesig_datasets.tar.xz — 924 MB
- detectors/treering_datasets.tar.xz — 1.6 GB
- detectors/ufd_datasets.tar.xz — 442 MB
- masks/masks_stablesig.tar.xz — 2.2 MB
- masks/masks_treering_wm.tar.xz — 1.2 MB
⚙️ Usage
Download & Extract
```python
from huggingface_hub import hf_hub_download
import tarfile, os

REPO = "eoguzakin/Robustness of AI-Generated Image Detection Against Localized Inpainting Attacks"

def fetch_and_extract(filename, target_dir):
    path = hf_hub_download(repo_id=REPO, filename=filename, repo_type="dataset")
    os.makedirs(target_dir, exist_ok=True)
    with tarfile.open(path, "r:xz") as tar:
        tar.extractall(target_dir)
    print("Extracted:", target_dir)

# Example: UFD view + StableSig masks
fetch_and_extract("detectors/ufd_datasets.tar.xz", "/tmp/ufd")
fetch_and_extract("masks/masks_stablesig.tar.xz", "/tmp/masks_stablesig")
```
Integrity check
```bash
sha256sum -c checksums.sha256
```
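If sha256sum is unavailable (e.g. on Windows), a minimal Python equivalent is sketched below; it assumes checksums.sha256 uses the standard `<hash>  <path>` format and matches archives by basename:

```python
import hashlib, os

def sha256sum(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 (archives are up to ~1.6 GB)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(archive_path, checksums_path="checksums.sha256"):
    """Check one downloaded archive against its entry in checksums.sha256."""
    with open(checksums_path) as f:
        expected = {
            os.path.basename(parts[1].lstrip("*")): parts[0]
            for parts in (line.split() for line in f if line.strip())
        }
    return expected.get(os.path.basename(archive_path)) == sha256sum(archive_path)
```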
🧪 Provenance
Reals: SEMI-TRUTHS (Pal et al. 2024), OpenImages subset.
Fakes: GenImage diverse generator set.
Inpainting attacks: LaMa (Suvorov et al. 2022), ZITS (Dong et al. 2022).
Watermarks: Stable Signature (Fernandez et al. 2023), Tree-Ring (Wen et al. 2023).
Detector-specific preprocessing is applied ahead of time (not at runtime), ensuring comparability across detectors.
📸 Sample Images
Baseline (Real vs Fake)
Inpainted reals (LaMa, ZITS & SEMI-TRUTHS)
Watermarks (StableSig vs Tree-Ring)
📚 Citations
If you use this dataset, please cite:
- Pal et al., 2024 — Semi-Truths: A Large-Scale Dataset of AI-Augmented Images for Evaluating Robustness of AI-Generated Image Detectors.
- Ojha et al., 2023 — Universal Fake Image Detectors.
- Corvi et al., 2023 — On the Detection of Synthetic Images Generated by Diffusion Models.
- Wang et al., 2023 — DIRE for Diffusion-Generated Image Detection.
- Ricker et al., 2024 — AEROBLADE.
- Fernandez et al., 2023 — Stable Signature.
- Wen et al., 2023 — Tree-Ring Watermarks.
- Suvorov et al., 2022 — LaMa Inpainting.
- Rombach et al., 2022 — Latent Diffusion Models.
- Dong et al., 2022 — ZITS Inpainting.
📝 License
Derived datasets are for research use only. Upstream datasets (SEMI-TRUTHS, GenImage, LaMa, ZITS, etc.) retain their original licenses. This packaging (scripts + archive structure) is released under CC BY-NC 4.0 unless otherwise specified.
👤 Maintainer
Oguz Akin — Saarland University
Contact: [email protected]
🗓️ Changelog
v1.0 — Initial release with detector views + masks for LaMa and ZITS inpainting.