🌌 CaptainErisNebula-12B-Chimera-v1.1

⚡ Quantized Models
Optimized versions for faster inference and lower memory usage (a loading sketch follows the list):
- 🟢 Lewdiculus's Imatrix GGUFs <3 (GGUF Format, IQ-Imatrix)
- 🟢 Nitral's 4bpw EXL3 (EXL3 Format)
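
For the GGUF builds, a run with llama-cpp-python might look like the sketch below. This is only an illustration: the file name, context size, and sampling settings are placeholders, not the exact quant shipped in the linked repo.

```python
# Minimal sketch: running an IQ-Imatrix GGUF quant with llama-cpp-python.
# The model_path is a placeholder; point it at whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="CaptainErisNebula-12B-Chimera-v1.1-IQ4_XS-imat.gguf",  # hypothetical filename
    n_ctx=8192,       # context window; lower it if you run out of memory
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm(
    "Write a short scene aboard a derelict starship.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```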
⚙️ Model Details
| Feature | Description |
|---|---|
| Size | 12B parameters |
| Library | Transformers (loading sketch below) |
| Composition | Blends Chimera v1 with v0.420; this v1.1 release sharpens reasoning while preserving creativity. |
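
Since the card lists Transformers as the library, the unquantized weights should load the usual way. The repo id below is assumed from the model name, and the generation settings are illustrative only.

```python
# Minimal sketch: loading the full-precision weights with 🤗 Transformers.
# The repo id is assumed from the model name; check the actual model page before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # roughly 24 GB of VRAM for 12B in bf16
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, temperature=0.8, do_sample=True)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```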
🗒️ Community Note:
This is my final open-source model for now. Thank you for being part of this strange but oddly beautiful mess of a journey... Arrivederci, friends 🚀 -Nitral-AI