# Author Regulatory Focus Classifier (German)

This model is a fine-tuned transformer-based classifier that detects the **regulatory focus** expressed in German-language text, classifying whether the language conveys a **promotion** focus (aspirational, growth-oriented) or a **prevention** focus (safety- and obligation-oriented).

It is fine-tuned on top of a German-language base model for binary text classification.

## Model Details

- **Base model**: `deepset/gbert-large`
- **Fine-tuned for**: Binary classification (regulatory focus)
- **Language**: German
- **Framework**: Hugging Face Transformers (see the `pipeline` sketch below)
- **Model format**: `safetensors`
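
For quick experiments, the tokenizer and model can also be loaded in a single call via the `pipeline` API. A minimal sketch: the input sentence is illustrative, and the label strings in the output depend on the checkpoint's `id2label` config (they may be the generic `LABEL_0`/`LABEL_1` if no names were stored).

```python
from transformers import pipeline

# One-call loading of tokenizer + model weights from the Hub
classifier = pipeline(
    "text-classification",
    model="aveluth/author_regulatory_focus_classifier",
)

# Illustrative input; output labels follow the checkpoint's id2label mapping
print(classifier("Wir wollen unsere Ziele unbedingt erreichen."))
```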

## Use Cases

- Social psychology and communication research
- Marketing and consumer behavior analysis
- Literary or political discourse analysis
- Behavioral modeling and goal-orientation profiling

## Example Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained("aveluth/author_regulatory_focus_classifier")
tokenizer = AutoTokenizer.from_pretrained("aveluth/author_regulatory_focus_classifier")

text = "Wir setzen alles daran, unsere Ziele zu erreichen."  # illustrative example input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():  # inference only, no gradients needed
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()

print("Predicted class:", "prevention" if predicted_class == 0 else "promotion")
```

> The snippet loads the model directly from the Hugging Face Hub under `aveluth/author_regulatory_focus_classifier`; adjust the identifier only if you host the weights elsewhere.
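
For scoring many documents at once, the tokenizer can pad and truncate a list of texts into a single batch. A minimal sketch; the two inputs are illustrative, not drawn from the training data:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "aveluth/author_regulatory_focus_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

texts = [
    "Wir wollen neue Märkte erschließen.",  # illustrative, promotion-flavored
    "Wir dürfen keine Risiken eingehen.",   # illustrative, prevention-flavored
]
# padding/truncation let differently sized sequences share one tensor batch
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1).tolist()
for text, pred in zip(texts, preds):
    print(text, "->", "prevention" if pred == 0 else "promotion")
```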

## Labels

| Class | Description                 |
|-------|-----------------------------|
| `0`   | Prevention-focused language |
| `1`   | Promotion-focused language  |
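
When a confidence score is needed rather than a hard label, the logits can be converted to per-class probabilities with a softmax. A minimal sketch using the class mapping from the table above (the input sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "aveluth/author_regulatory_focus_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

labels = {0: "prevention", 1: "promotion"}  # mapping from the table above
inputs = tokenizer("Sicherheit geht vor.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]  # normalize logits to probabilities
for idx, name in labels.items():
    print(f"{name}: {probs[idx]:.3f}")
```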

## Training Details

- **Training data**: Custom labeled corpus based on psychological framing
- **Loss function**: Cross-entropy
- **Optimizer**: AdamW
- **Epochs**: 4
- **Learning rate**: 3e-5 (see the fine-tuning sketch below)
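
A minimal sketch of a comparable fine-tuning setup with the `Trainer` API, under stated assumptions: the toy texts, labels, and batch size are illustrative placeholders (the actual corpus is not public), while the epoch count and learning rate mirror the values above. Cross-entropy loss and the AdamW optimizer are the Transformers defaults for this classification head, matching the card.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "deepset/gbert-large", num_labels=2
)

# Toy stand-ins for the (non-public) labeled corpus
train_texts = ["Wir streben nach Wachstum.", "Wir müssen Fehler vermeiden."]
train_labels = [1, 0]  # 1 = promotion, 0 = prevention


class FocusDataset(torch.utils.data.Dataset):
    """Tokenizes the texts once and serves (input, label) pairs as tensors."""

    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item


args = TrainingArguments(
    output_dir="gbert-regulatory-focus",
    num_train_epochs=4,              # per the card
    learning_rate=3e-5,              # per the card
    per_device_train_batch_size=16,  # assumption: not stated in the card
)

trainer = Trainer(model=model, args=args,
                  train_dataset=FocusDataset(train_texts, train_labels))
trainer.train()  # cross-entropy loss + AdamW optimizer are the defaults here
```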

## Limitations

- Trained on German-language data only
- Performance may vary on out-of-domain text (e.g., technical manuals, poetry)
- May not generalize across all cultural framings of regulatory focus

## License

[MIT](LICENSE)

## Citation

If you use this model in your research, please cite:

```bibtex
@article{velutharambath2023prevention,
  title   = {Prevention or Promotion? Predicting Author's Regulatory Focus},
  author  = {Velutharambath, Aswathy and Sassenberg, Kai and Klinger, Roman},
  journal = {Northern European Journal of Language Technology},
  volume  = {9},
  number  = {1},
  year    = {2023}
}
```