---
library_name: llama.cpp
pipeline_tag: text-generation
tags:
  - gguf
  - conversational
license: apache-2.0
base_model: Heralax/Augmentoolkit-DataSpecialist-v0.1
base_model_relation: quantized
language:
  - en
model-index:
  - name: Augmentoolkit-DataSpecialist-v0.1-GGUF
    results: []
---

# Augmentoolkit-DataSpecialist-v0.1-GGUF

This repository contains GGUF quantizations of the model Heralax/Augmentoolkit-DataSpecialist-v0.1.
The files are in GGUF format for use with llama.cpp and compatible runtimes such as LM Studio and KoboldCpp.
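A typical local setup looks like the sketch below. The repository id and quant filename are illustrative placeholders, not the actual artifact names; substitute the file names listed under this repo's "Files and versions" tab.

```shell
# Fetch one quant from the Hub (repo id and filename are illustrative --
# replace them with the actual names from this repository's file list)
huggingface-cli download USER/Augmentoolkit-DataSpecialist-v0.1-GGUF \
  --include "*Q4_K_M.gguf" --local-dir .

# Start an interactive chat session with llama.cpp's CLI
./llama-cli -m Augmentoolkit-DataSpecialist-v0.1.Q4_K_M.gguf -cnv \
  -c 4096 --temp 0.7
```

LM Studio and KoboldCpp can load the same `.gguf` file directly through their model pickers, no conversion needed.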

## Source model

- Base model: Heralax/Augmentoolkit-DataSpecialist-v0.1 (Apache-2.0).
- Original base lineage (per the upstream card): Heralax/datagen-pretrain-v1-7b-mistralv0.2 (derived from Mistral 7B v0.2).

## Intended use

Text generation and conversational assistance for data-specialist workflows.

## Quantization notes

- Filenames follow GGUF conventions, including the quantization-type suffix (e.g., Q4_K_M, Q5_K_M, Q8_0).
- No additional fine-tuning was performed; the weights are a direct quantization of the upstream safetensors.
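For reference, a direct quantization like the one described above can be reproduced with the tools that ship with llama.cpp; the paths below are illustrative.

```shell
# 1. Convert the upstream HF safetensors checkpoint to an f16 GGUF
#    (convert_hf_to_gguf.py lives in the llama.cpp repository root)
python convert_hf_to_gguf.py ./Augmentoolkit-DataSpecialist-v0.1 \
  --outfile Augmentoolkit-DataSpecialist-v0.1.f16.gguf --outtype f16

# 2. Quantize the f16 file down to a smaller type, e.g. Q4_K_M
./llama-quantize Augmentoolkit-DataSpecialist-v0.1.f16.gguf \
  Augmentoolkit-DataSpecialist-v0.1.Q4_K_M.gguf Q4_K_M
```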

## Prompt template

Use a Mistral-style chat template, or fall back to your runtime's default chat template.
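Assuming the upstream model keeps the Mistral Instruct convention (worth verifying against its `tokenizer_config.json`), user turns are wrapped in `[INST] ... [/INST]` after a BOS token, with no separate system role, so a two-turn exchange would be serialized like this:

```
<s>[INST] Which columns in this CSV should I normalize? [/INST] The numeric ones with skewed ranges.</s>[INST] And how should I handle missing values? [/INST]
```

Most runtimes apply this automatically when the template is embedded in the GGUF metadata.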

## Hardware and runtimes

Tested with llama.cpp and LM Studio on Apple Silicon.