---
license: other
tags:
  - comfyui
  - flux
  - sdxl
  - gguf
  - stable diffusion
  - t5xxl
  - controlnet
  - unet
  - vae
  - model hub
  - one click
  - upscaler
---

## ⚠️ Work in Progress

This repo is actively being developed, especially the model card. Use it as you see fit, but know that some things may not be accurate or up to date yet.

# ComfyUI-Starter-Packs

A curated vault of the most essential models for ComfyUI users: Flux1, SDXL, ControlNets, Clips, and GGUFs, all in one place and carefully organized. If you find it useful, hit the heart at the top next to this repo's name. A small action done by many will let me know that working on this repo is helpful.


## 🪜 What's Inside

This repo is a purposeful collection of the most important models, organized into folders so that everything you need for a given ecosystem is in one place:

### Flux1

  • Unet Models: Dev, Schnell, Depth, Canny, Fill
  • GGUF Versions: Q3, Q5, Q6 for each major branch
  • Clip + T5XXL encoders (standard + GGUF versions)
  • Loras: Only those that are especially useful or improve the model.

### SDXL

  • Recommended checkpoints to get you started: Pony Realism and Juggernaut
  • Base + Refiner official models
  • ControlNets: Depth, Canny, OpenPose, Normal, etc.

### Extra

  • VAE, upscalers, and anything required to support workflows

## 🏋️ Unet Recommendations (Based on VRAM)

| VRAM  | Use Case             | Model Type                  |
| ----- | -------------------- | --------------------------- |
| 16GB+ | Full FP8             | flux1-dev-fp8.safetensors   |
| 12GB  | Balanced Q5_K_S GGUF | flux1-dev-Q5_K_S.gguf       |
| 8GB   | Light Q3_K_S GGUF    | flux1-dev-Q3_K_S.gguf       |

GGUF models are significantly lighter and designed for low-VRAM systems.
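The VRAM tiers in the table above can be expressed as a tiny helper if you want to script your setup. This is a minimal sketch; the function name is illustrative, and the thresholds are starting points rather than hard rules:

```python
def pick_flux_unet(vram_gb: float) -> str:
    """Suggest a Flux1 Dev Unet file for a given amount of GPU VRAM.

    Thresholds mirror the recommendation table above.
    """
    if vram_gb >= 16:
        return "flux1-dev-fp8.safetensors"  # full FP8
    if vram_gb >= 12:
        return "flux1-dev-Q5_K_S.gguf"      # balanced GGUF
    return "flux1-dev-Q3_K_S.gguf"          # light GGUF for 8GB cards
```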

## 🧠 T5XXL Recommendations (Based on RAM)

| System RAM | Use Case                                 | Model Type                                                |
| ---------- | ---------------------------------------- | --------------------------------------------------------- |
| 64GB       | Max quality                              | t5xxl_fp16.safetensors                                    |
| 32GB       | High quality (can crash if multitasking) | t5xxl_fp16.safetensors or t5xxl_fp8_scaled.safetensors    |
| 16GB       | Balanced                                 | t5xxl_fp8_scaled.safetensors                              |
| <16GB      | Low-memory / safe mode                   | GGUF Q5_K_M or Q3_K_L                                     |

A quantized T5XXL only directly affects prompt adherence, not image fidelity.

⚠️ These are recommended tiers, not hard rules. RAM usage depends on your active processes, ComfyUI extensions, batch sizes, and other factors.
If you're getting random crashes, try scaling down one tier.
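If you're not sure which tier you land in, a short stdlib-only sketch can detect total RAM and map it to an encoder. The helper names are hypothetical; the filenames match the files listed under Flux1/clip/ below, and the mapping takes the conservative option in each tier:

```python
import os


def system_ram_gb() -> float:
    """Total physical RAM in GB (Linux/macOS); 0.0 if it can't be detected."""
    try:
        return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3
    except (AttributeError, OSError, ValueError):
        return 0.0


def pick_t5xxl(ram_gb: float) -> str:
    """Map system RAM to a T5XXL encoder, picking the safer option per tier."""
    if ram_gb >= 64:
        return "t5xxl_fp16.safetensors"               # max quality
    if ram_gb >= 16:
        return "t5xxl_fp8_e4m3fn_scaled.safetensors"  # safe choice for 16-32GB
    return "t5-v1_1-xxl-encoder-Q5_K_M.gguf"          # low-memory GGUF
```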


## 🏛 Folder Structure

```text
Adetailer/
├─ Ultralytics/bbox/
│  ├─ face_yolov8m.pt
│  └─ hand_yolov8s.pt
└─ sams/
   └─ sam_vit_b_01ec64.pth

Flux1/
├─ PuLID/
│  └─ pulid_flux_v0.9.1.safetensors
├─ Style_Models/
│  └─ flux1-redux-dev.safetensors
├─ clip/
│  ├─ ViT-L-14-TEXT-detail-improved-hiT-GmP-HF.safetensors
│  ├─ clip_l.safetensors
│  ├─ t5xxl_fp16.safetensors
│  ├─ t5xxl_fp8_e4m3fn_scaled.safetensors
│  └─ GGUF/
│     ├─ t5-v1_1-xxl-encoder-Q3_K_L.gguf
│     └─ t5-v1_1-xxl-encoder-Q5_K_M.gguf
├─ clip_vision/
│  └─ sigclip_vision_patch14_384.safetensors
├─ vae/
│  └─ ae.safetensors
└─ unet/
   ├─ Dev/
   │  ├─ flux1-dev-fp8.safetensors
   │  └─ GGUF/
   │     ├─ flux1-dev-Q3_K_S.gguf
   │     └─ flux1-dev-Q5_K_S.gguf
   ├─ Fill/
   │  ├─ flux1-fill-dev-fp8.safetensors
   │  └─ GGUF/
   │     ├─ flux1-fill-dev-Q3_K_S.gguf
   │     └─ flux1-fill-dev-Q5_K_S.gguf
   ├─ Canny/
   │  ├─ flux1-canny-dev-fp8.safetensors
   │  └─ GGUF/
   │     ├─ flux1-canny-dev-Q4_0-GGUF.gguf
   │     └─ flux1-canny-dev-Q5_0-GGUF.gguf
   ├─ Depth/
   │  ├─ flux1-depth-dev-fp8.safetensors
   │  └─ GGUF/
   │     ├─ flux1-depth-dev-Q4_0-GGUF.gguf
   │     └─ flux1-depth-dev-Q5_0-GGUF.gguf
   └─ Schnell/
      ├─ flux1-schnell-fp8-e4m3fn.safetensors
      └─ GGUF/
         ├─ flux1-schnell-Q3_K_S.gguf
         └─ flux1-schnell-Q5_K_S.gguf
```

## 📈 Model Previews (Coming Soon)

I might add a single grid-style graphic showing example outputs:

  • Dev vs Schnell: Quality vs Speed
  • Depth / Canny / Fill: Source image → processed map → output
  • SDXL examples: Realism, Stylized, etc.

All preview images will be grouped into a single compact visual block per category.


## 📢 Want It Even Easier?

Skip the manual downloads.

🎁 Patreon.com/MaxedOut (Coming Soon) will get you:

  • One-click installers for all major Flux & SDXL workflows
  • ComfyUI workflows built for beginners and pros
  • Behind-the-scenes model picks and tips

## ❓ FAQ

Q: Why not every GGUF?
A: Because Q3, Q5, and Q6 cover the most meaningful range. No bloat.

Q: Are these the official models?
A: Yes. Most are sourced directly from their creators, and the rest from validated mirrors.

Q: Will this grow?
A: Yes, but only with purpose. It's not meant to become a collection of every model on the face of the earth.

Q: Why aren’t there more Loras here?
A: Stylized or niche Loras are showcased on Patreon, where we do deeper dives and examples. Some may get added here later if they become foundational.


## ✨ Final Thoughts

You shouldn’t need to hunt through 12 Civitai pages and 6 Hugging Face repos just to build out your ComfyUI models folder.

This repo fixes that.