Augmentoolkit-DataSpecialist-v0.1-GGUF
This repository contains GGUF quantizations of the model Heralax/Augmentoolkit-DataSpecialist-v0.1.
Files are provided in GGUF format for llama.cpp-compatible runtimes such as llama.cpp, LM Studio, and KoboldCpp.
Source model
- Base model: Heralax/Augmentoolkit-DataSpecialist-v0.1 (Apache-2.0).
- Original base lineage (per upstream card): Heralax/datagen-pretrain-v1-7b-mistralv0.2 (Mistral 7B v0.2 derived).
Intended use
Text generation and conversational assistance for data-specialist workflows.
Quantization notes
- Filenames follow GGUF conventions including the quantization type suffix (e.g., Q4_K_M, Q5_K_M, Q8_0).
- No additional fine-tuning was performed; weights are a direct quantization of the upstream safetensors.
Prompt template
Use a Mistral-style instruct chat template, or fall back to your runtime's default; check the upstream model's tokenizer chat template for the exact format.
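As a rough illustration, a Mistral-style instruct prompt can be rendered like this. The exact delimiters are an assumption based on the common Mistral 7B Instruct v0.2 format (`<s>[INST] ... [/INST]`); confirm against the upstream tokenizer's chat template before relying on it.

```python
def mistral_prompt(messages: list[dict[str, str]]) -> str:
    """Render a chat as a Mistral-instruct-style prompt string.

    Sketch only: assumes the common Mistral v0.2 layout
    <s>[INST] user [/INST] assistant</s>[INST] user [/INST] ...
    Most runtimes apply this template for you automatically.
    """
    out = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            out += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            out += f" {msg['content']}</s>"
    return out
```

For instance, `mistral_prompt([{"role": "user", "content": "Hi"}])` produces `<s>[INST] Hi [/INST]`, which is what the model would see before generating its reply.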
Hardware and runtimes
Tested with llama.cpp and LM Studio on Apple Silicon.