---
license: apache-2.0
language:
- lv
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- legal
pipeline_tag: text-generation
---
|
|
|
This is a fine-tuned version of the **Llama-3.1-8B-Instruct** model, adapted for answering questions about Latvian legislation. The model was fine-tuned on a [dataset](http://hdl.handle.net/20.500.12574/130) of roughly 15,000 question–answer pairs sourced from the [LVportals.lv](https://lvportals.lv/e-konsultacijas) archive.
|
|
|
Quantized versions of the model are available for use with Ollama and other local LLM runtime environments that support the GGUF format.
|
|
|
The data preparation, fine-tuning process, and comprehensive evaluation are described in more detail in:
|
|
|
> Artis Pauniņš. *Evaluation and Adaptation of Large Language Models for Question-Answering on Legislation*. Master's Thesis. University of Latvia, 2025.
|
|
|
**Note**:
|
|
|
The model may occasionally generate overly long responses. To prevent this, it is recommended to set the `num_predict` parameter to limit the number of generated tokens, either in your Python code or in the `Modelfile`, depending on how the model is run.
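For example, when running a quantized GGUF build through Ollama, the limit can be declared directly in the `Modelfile` (the file name and the value of 512 below are illustrative placeholders, not part of this release):

```
FROM ./model.gguf
PARAMETER num_predict 512
```

Equivalently, when calling the model from Python through an Ollama client, the same cap can be passed per request via the request options, e.g. `options={"num_predict": 512}`.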