Paper: Denoising Diffusion Probabilistic Models (arXiv:2006.11239)
A fine-tuned diffusion model for generating high-quality anime faces using DDPM. This model is based on Google's pre-trained ddpm-celebahq-256 model and fine-tuned on 7,000+ anime face images.
from diffusers import DDPMPipeline
import torch
# Load the model
pipeline = DDPMPipeline.from_pretrained("abcd2019/Anime-face-generation")
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = pipeline.to(device)
# Generate a single image
image = pipeline(num_inference_steps=100).images[0]
image.save("anime_face.png")
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("abcd2019/Anime-face-generation")
pipeline = pipeline.to("cuda")
# Generate 5 anime faces
images = pipeline(batch_size=5, num_inference_steps=100).images
for i, image in enumerate(images):
    image.save(f"anime_face_{i}.png")
# Fast generation (fewer steps, less quality)
fast_image = pipeline(num_inference_steps=50).images[0]
# High quality (more steps, slower)
quality_image = pipeline(num_inference_steps=150).images[0]
# Recommended: 100 steps for good balance
balanced_image = pipeline(num_inference_steps=100).images[0]
from diffusers import DDPMPipeline, DDIMScheduler
pipeline = DDPMPipeline.from_pretrained("abcd2019/Anime-face-generation")
# Switch to DDIM for faster sampling
scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.scheduler = scheduler
# Pass the step count at call time; the pipeline sets the scheduler's
# timesteps itself, so a prior set_timesteps() call would be overridden.
fast_image = pipeline(num_inference_steps=50).images[0]  # ~50 DDIM steps instead of 1000 DDPM steps
This model generates synthetic anime faces and should not be used to:
If you use this model in your research or project, please credit the base ddpm-celebahq-256 model.
Potential enhancements for future versions:
Created: 2025-12-28
Model Card Contact: [Your Name/Username]