Instructed Llama-eus-8B [DEPRECATED]

First instruction-tuned version of Llama-eus-8B, created by instruction tuning the base model on the SlimOrca_eu Basque instruction dataset and preference tuning it with ultrafeedback_eu.
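For quick reference, below is a minimal loading and generation sketch using the Hugging Face transformers chat-template API. The prompt and generation settings are illustrative assumptions, not values published with the model; the weights ship as 8B-parameter BF16 safetensors, so the sketch loads them in bfloat16.

```python
# Minimal usage sketch (assumptions: standard transformers chat-template API;
# prompt and generation settings are illustrative, not tuned values).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "orai-nlp/Llama-eus-8B-Instruct-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

# Example Basque instruction: "What is the capital of the Basque Country?"
messages = [{"role": "user", "content": "Zein da Euskal Herriko hiriburua?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```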

📕 Paper: Pipeline Analysis for Developing Instruct LLMs in Low-Resource Languages: A Case Study on Basque (https://aclanthology.org/2025.naacl-long.629/)

License

Llama 3.1 is licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.

Acknowledgments

This work is part of the BasqueLLM project, titled "First steps towards an artificial intelligence in Basque based on LLMs" (EXP: 2023-CIEN-000081-01), partially funded by the Gipuzkoa Science, Technology and Innovation Network Program of the Provincial Council of Gipuzkoa. Model training and development were conducted using the Hyperion system at the Donostia International Physics Center (DIPC).

Citation

If you use Llama-eus-8B, please cite the following reference:

@inproceedings{corral-etal-2025-pipeline,
    title = "Pipeline Analysis for Developing Instruct {LLM}s in Low-Resource Languages: A Case Study on {B}asque",
    author = "Corral, Ander and Antero, Ixak Sarasua and Saralegi, Xabier",
    editor = "Chiruzzo, Luis and Ritter, Alan and Wang, Lu",
    booktitle = "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
    month = apr,
    year = "2025",
    address = "Albuquerque, New Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.naacl-long.629/",
    pages = "12636--12655",
    ISBN = "979-8-89176-189-6",
    abstract = "Large language models (LLMs) are typically optimized for resource-rich languages like English, exacerbating the gap between high-resource and underrepresented languages. This work presents a detailed analysis of strategies for developing a model capable of following instructions in a low-resource language, specifically Basque, by focusing on three key stages: pre-training, instruction tuning, and alignment with human preferences. Our findings demonstrate that continual pre-training with a high-quality Basque corpus of around 600 million words improves natural language understanding (NLU) of the foundational model by over 12 points. Moreover, instruction tuning and human preference alignment using automatically translated datasets proved highly effective, resulting in a 24-point improvement in instruction-following performance. The resulting models, Llama-eus-8B and Llama-eus-8B-instruct, establish a new state-of-the-art for Basque in the sub-10B parameter category."
}

