CLIP-ViT-B-32 Sparse Autoencoder (x64 vanilla, L1 = 1e-05)
Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 8
- Component: hook_resid_post (the residual stream after transformer block 8; see the extraction sketch below)
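
For orientation, here is a minimal sketch of how one could capture the activations this SAE consumes, i.e. the residual stream after vision transformer block 8 of CLIP-ViT-B-32, using open_clip and a plain forward hook. The pretrained tag and module path are assumptions about the checkpoint layout, not taken from the actual training pipeline (the run itself presumably used the Prisma tooling, where this hook point is named hook_resid_post).

```python
# Hedged sketch: capturing layer-8 post-block residual-stream activations
# from CLIP-ViT-B-32. The open_clip pretrained tag and module path are
# assumptions, not confirmed from the training code.
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="datacomp_xl_s13b_b90k"
)
model.eval()

cache = {}

def grab_resid_post(module, inputs, output):
    # The output of a transformer block is the post-block residual stream.
    cache["resid_post_8"] = output.detach()

model.visual.transformer.resblocks[8].register_forward_hook(grab_resid_post)

with torch.no_grad():
    model.encode_image(torch.randn(1, 3, 224, 224))  # stand-in for a preprocessed image

# Shape is (tokens, batch, 768) or (batch, tokens, 768) depending on the
# open_clip version; 50 tokens = 1 CLS + 49 patches for ViT-B-32 at 224px.
print(cache["resid_post_8"].shape)
```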
 
Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla ReLU architecture; see the sketch after this list)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens (49 image patches + 1 CLS token for ViT-B-32 at 224 x 224)
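
A minimal PyTorch sketch of the vanilla architecture above: a single ReLU encoder from 768 to 49,152 dimensions and a linear decoder back. Parameter names, the bias layout, and the reading of the encoder_transpose_decoder flag (encoder initialized as the transpose of the decoder) are assumptions, not the exact released implementation.

```python
import torch
import torch.nn as nn

class VanillaSAE(nn.Module):
    def __init__(self, d_in: int = 768, expansion: int = 64):
        super().__init__()
        d_sae = d_in * expansion  # 768 * 64 = 49,152 features
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        # encoder_transpose_decoder init, read here as: initialize the
        # decoder, then start the encoder as its transpose (assumption).
        nn.init.kaiming_uniform_(self.W_dec)
        with torch.no_grad():
            self.W_enc.copy_(self.W_dec.T)

    def forward(self, x: torch.Tensor):
        # Encode: ReLU over an affine map, after subtracting the decoder bias.
        acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: linear reconstruction back to the 768-dim residual stream.
        recon = acts @ self.W_dec + self.b_dec
        return recon, acts
```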
 
Performance Metrics
- L1 Coefficient: 1e-05
- L0 Sparsity: 1586.5746 (mean number of active features per token; see the metric sketch below)
- Explained Variance: 0.9823 (98.23%)
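
The two headline numbers can be read with the conventional SAE definitions, sketched below under assumptions (the run's exact evaluation code may differ): L0 is the mean count of nonzero features per token, and explained variance compares the reconstruction residual's variance to the variance of the inputs.

```python
import torch

def l0_sparsity(acts: torch.Tensor) -> float:
    # Mean number of nonzero SAE features per token; reported here as
    # ~1586.57 of the 49,152 available features firing on average.
    return (acts != 0).float().sum(dim=-1).mean().item()

def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> float:
    # One common definition: 1 - Var(x - recon) / Var(x), summed over
    # dimensions. 0.9823 means ~98.23% of activation variance is recovered.
    residual_var = (x - recon).var(dim=0).sum()
    total_var = x.var(dim=0).sum()
    return (1.0 - residual_var / total_var).item()
```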
 
Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 warmup steps; a matching training-loop sketch follows this list)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
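
Put together, the configuration above corresponds to roughly the following training loop. The optimizer choice (Adam), total step count, loss form, and batch shape are assumptions for illustration; the learning rate, warmup length, L1 coefficient, and gradient clipping value are taken from this card.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

sae = VanillaSAE()                      # architecture sketch from above
optimizer = torch.optim.Adam(sae.parameters(), lr=4e-4)

total_steps = 10_000                    # placeholder; not stated on the card
warmup = LinearLR(optimizer, start_factor=0.01, total_iters=200)   # 200 warmup steps
cosine = CosineAnnealingLR(optimizer, T_max=total_steps - 200)     # cosine annealing
scheduler = SequentialLR(optimizer, schedulers=[warmup, cosine], milestones=[200])

for step in range(total_steps):         # stands in for 10 epochs over cached activations
    x = torch.randn(4096, 768)          # dummy batch of residual-stream activations
    recon, acts = sae(x)
    mse = (recon - x).pow(2).mean()
    l1 = acts.abs().sum(dim=-1).mean()
    loss = mse + 1e-5 * l1              # L1 coefficient from the card
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(sae.parameters(), max_norm=1.0)  # clip at 1.0
    optimizer.step()
    scheduler.step()
```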
 
Experiment Tracking
- Weights & Biases Run ID: lbjuvwfd
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/lbjuvwfd/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
 
Citation
@misc{2024josephsparseautoencoders,
    title={Sparse Autoencoders for CLIP-ViT-B-32},
    author={Joseph, Sonia},
    year={2024},
    publisher={Prisma-Multimodal},
    url={https://huggingface.co/Prisma-Multimodal},
    note={Layer 8, hook_resid_post, Run ID: lbjuvwfd}
}