normundsg committed (verified)
Commit 5e0a69d · Parent: 39c1903

Update README.md

Files changed (1): README.md +21 -3
README.md CHANGED
@@ -1,3 +1,21 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ language:
+ - lv
+ base_model:
+ - meta-llama/Llama-3.1-8B-Instruct
+ tags:
+ - legal
+ ---
+
+ This is a fine-tuned version of the **Llama3.1-8B-Instruct** model, adapted for answering questions about legislation in Latvia. The model was fine-tuned on a [dataset](http://hdl.handle.net/20.500.12574/130) of ~15 thousand question–answer pairs sourced from the [LVportals.lv](https://lvportals.lv/e-konsultacijas) archive.
+
+ A quantized version of the model is available for use with Ollama or other local LLM runtime environments that support the GGUF format.
+
+ The data preparation, fine-tuning process, and comprehensive evaluation are described in more detail in:
+
+ > Artis Pauniņš. Evaluation and Adaptation of Large Language Models for Question-Answering on Legislation. Master’s Thesis. University of Latvia, 2025.
+
+ **Note**:
+
+ The model may occasionally generate overly long responses. To prevent this, set the `num_predict` parameter to limit the number of generated tokens, either in your Python code or in the `Modelfile`, depending on how the model is run.
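
For reference, a minimal sketch of capping the response length through Ollama's Python client. The model name `legal-llama-lv` and the question text are placeholders, assuming the GGUF model has already been registered locally with `ollama create`:

```python
# Minimal sketch: limiting generation length via Ollama's num_predict option.
# "legal-llama-lv" is a placeholder model name for the locally registered GGUF model.
import ollama

response = ollama.chat(
    model="legal-llama-lv",
    messages=[
        {
            "role": "user",
            # Example question (Latvian): "In what cases may an employer
            # terminate an employment contract?"
            "content": "Kādos gadījumos darba devējs drīkst uzteikt darba līgumu?",
        }
    ],
    options={"num_predict": 512},  # stop generation after at most 512 tokens
)
print(response["message"]["content"])
```

Alternatively, the same limit can be set once in the `Modelfile` with a line such as `PARAMETER num_predict 512` before building the model with `ollama create`.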