---
library_name: transformers
tags: []
---

# Model Card for Meta-Llama3-8B-Instruct-assessment

## Model Details

### Model Description

This is the model card of Meta-Llama3-8B-Instruct-assessment, a model developed by fine-tuning Meta-Llama3-8B-Instruct. The model was fine-tuned with LoRA, with the base model loaded in 16-bit precision. Low-Rank Adaptation (LoRA) makes fine-tuning LLMs easier by reducing the number of trainable parameters, producing lightweight and efficient fine-tunes. LoRA was applied by tuning the matrix rank `r` and the `alpha` scaling value.
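The kind of LoRA setup described above can be sketched with `peft`'s `LoraConfig`. The values below are purely illustrative assumptions; the actual `r`, `alpha`, and target modules used for this checkpoint are stored in the adapter config on the Hub.

```python
from peft import LoraConfig

# Hypothetical values for illustration only; the real hyperparameters
# for this checkpoint live in its adapter_config.json on the Hub.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],   # attention projections commonly adapted
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

Raising `r` increases adapter capacity (and trainable parameters), while `lora_alpha` rescales how strongly the low-rank update perturbs the frozen base weights.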

- **Developed by:** xap
- **License:** llama3
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B-Instruct
- **Finetuned using dataset:** SelfCode2.0

## How to Get Started with the Model

Inputs to this model (and any fine-tuning data) should follow the Alpaca prompt format.
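For reference, an Alpaca-style prompt looks like the following. This sketch assumes the standard Stanford Alpaca template; if this model was trained on a variant, adjust the field wording accordingly.

```python
# Standard Alpaca template (assumed); instruction/input contents are made up.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Assess the following code submission.",
    input="def add(a, b):\n    return a - b",
)
print(prompt)
```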

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, LoraConfig

tokenizer = AutoTokenizer.from_pretrained("xap/Meta-Llama3-8B-Instruct-assessment")
base_model = AutoModelForCausalLM.from_pretrained("xap/Meta-Llama3-8B-Instruct-assessment")

# Load the LoRA adapter configuration and attach the adapter to the base model
lora_config = LoraConfig.from_pretrained("xap/Meta-Llama3-8B-Instruct-assessment")
model = PeftModel.from_pretrained(
    base_model,
    "xap/Meta-Llama3-8B-Instruct-assessment",
    config=lora_config,
)
```