---
license: mit
base_model:
- utter-project/EuroLLM-9B-Instruct
---

# EuroLLM QLoRA – Grounding Act Classification

This model is a fine-tuned version of [EuroLLM-9B-Instruct](https://huggingface.co/utter-project/EuroLLM-9B-Instruct), optimized with QLoRA for efficient binary classification of German dialogue utterances into:

- **ADVANCE**: contributions that move the dialogue forward (e.g. confirmations, follow-ups, elaborations)
- **NON-ADVANCE**: other utterances (e.g. vague responses, misunderstandings, irrelevant comments)

## Use Cases

- Dialogue system analysis
- Teacher-student interaction classification
- Grounding in institutional advising or classroom discourse

## How to Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel, PeftConfig

# Read the adapter config to locate the base model, then attach the QLoRA adapter.
peft_config = PeftConfig.from_pretrained("MB55/EuroLLM-Classifier-QLoRA")
base_model = AutoModelForSequenceClassification.from_pretrained(
    peft_config.base_model_name_or_path,
    num_labels=2,  # binary head: ADVANCE vs. NON-ADVANCE
)
model = PeftModel.from_pretrained(base_model, "MB55/EuroLLM-Classifier-QLoRA")
tokenizer = AutoTokenizer.from_pretrained("MB55/EuroLLM-Classifier-QLoRA")
```
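
A minimal inference sketch follows. The example utterance is made up for illustration, and the `id2label` lookup assumes the saved config maps class indices to the label names above; if it does not, the raw class index is printed instead.

```python
import torch

# Hypothetical German utterance, roughly: "Yes exactly, I would like to take the module in the winter semester."
text = "Ja genau, ich würde das Modul gerne im Wintersemester belegen."

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
# Fall back to the plain class index if the config does not define readable label names.
label = model.config.id2label.get(pred_id, str(pred_id))
print(label)  # e.g. "ADVANCE" or "NON-ADVANCE", depending on the stored label mapping
```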