---
library_name: transformers
license: apache-2.0
datasets:
- nampdn-ai/tiny-codes
- nlpai-lab/openassistant-guanaco-ko
- philschmid/guanaco-sharegpt-style
language:
- ko
- en
inference: false
tags:
- unsloth
- phi-3
pipeline_tag: text-generation
---

# Phi-3-medium-4k-instruct-ko-poc-v0.1

## Model Details

This model was trained with the unsloth toolkit on top of Microsoft's Phi-3-medium-4k-instruct model (https://huggingface.co/unsloth/Phi-3-medium-4k-instruct), with Korean instruction data added to enhance its Korean generation performance.

Since my role is ML Technical Specialist helping customers with quick PoCs/prototypes rather than working developer, and the Azure GPU resources available to me were limited, I trained on only 40,000 samples on a single Azure Standard_NC24ads_A100_v4 VM for PoC purposes. Also, because I did not extend the tokenizer, generating Korean text consumes considerably more tokens than English.
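
For reference, the snippet below is a minimal sketch of how such a fine-tuning run can be set up with unsloth and TRL's `SFTTrainer`. This is not the actual training script: the LoRA settings, hyperparameters, and the one-row toy dataset are all illustrative assumptions.

```python
import torch
from datasets import Dataset
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model with 4-bit weights (QLoRA-style setup; an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Phi-3-medium-4k-instruct",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)

# Attach LoRA adapters; rank, alpha, and target modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = "unsloth",
    random_state = 42,
)

# Toy stand-in for the real ~40k-sample corpus: one pre-formatted phi-3
# chat transcript per row (see the Dataset section for the format).
dataset = Dataset.from_dict({"text": [
    "<|user|>\nWhat is Machine Learning in Korean?<|end|>\n"
    "<|assistant|>\n인공지능의 한 분야로 방대한 데이터를 분석해 향후 패턴을 예측하는 기법입니다.<|end|>\n",
]})

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",  # recent TRL versions take this via SFTConfig
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        bf16 = torch.cuda.is_bf16_supported(),
        fp16 = not torch.cuda.is_bf16_supported(),
        optim = "adamw_8bit",
        logging_steps = 10,
        output_dir = "outputs",
    ),
)
trainer.train()
```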

### Dataset

The datasets used for training are listed below. To prevent catastrophic forgetting, I included non-Korean corpora in the training data. Note that I did not use all of the data, only a sample of it. The Korean textbooks were converted to Q&A format, and the Guanaco datasets were reformatted to fit the multiturn format, i.e., `<|user|>\n{Q1}<|end|>\n<|assistant|>\n{A1}<|end|>\n<|user|>\n{Q2}<|end|>\n<|assistant|>\n{A2}<|end|>` (a conversion sketch follows the list).

- Korean textbooks (https://huggingface.co/datasets/nampdn-ai/tiny-codes)
- Korean translation of Guanaco (https://huggingface.co/datasets/nlpai-lab/openassistant-guanaco-ko)
- Guanaco ShareGPT style (https://huggingface.co/datasets/philschmid/guanaco-sharegpt-style)
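
As a concrete illustration of that reformatting, here is a minimal sketch that maps ShareGPT-style records onto the phi-3 multiturn template. The field names (`conversations`, `from`, `value`) follow the ShareGPT convention that the chat-template mapping below also assumes; verify them against the actual dataset schema.

```python
from datasets import load_dataset

# ShareGPT speaker tags -> phi-3 role tags
ROLE_TAGS = {"human": "<|user|>", "gpt": "<|assistant|>"}

def to_phi3_multiturn(example):
    # Render each turn as <|role|>\n{text}<|end|>\n and concatenate.
    text = ""
    for turn in example["conversations"]:
        text += f"{ROLE_TAGS[turn['from']]}\n{turn['value']}<|end|>\n"
    return {"text": text}

dataset = load_dataset("philschmid/guanaco-sharegpt-style", split="train")
dataset = dataset.map(to_phi3_multiturn)
print(dataset[0]["text"][:300])  # sanity check the rendered format
```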

## How to Get Started with the Model

### Code snippets

```python
### Load model
import torch
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template
from transformers import TextStreamer

max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
model_path = "daekeun-ml/Phi-3-medium-4k-instruct-ko-poc-v0.1"

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = model_path, # the fine-tuned checkpoint defined above
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)
tokenizer = get_chat_template(
    tokenizer,
    chat_template = "phi-3", # Supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
    mapping = {"role" : "from", "content" : "value", "user" : "human", "assistant" : "gpt"}, # ShareGPT style
)

params = {
    "max_new_tokens": 256,
    "use_cache": True,
    "temperature": 0.05,
    "do_sample": True
}

### Inference
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

# 1st example
messages = [
    {"from": "human", "value": "Continue the fibonacci sequence in Korean: 1, 1, 2, 3, 5, 8,"},
    {"from": "assistant", "value": "피보나치 수열의 다음 숫자는 13, 21, 34, 55, 89 등입니다. 각 숫자는 앞의 두 숫자의 합입니다."},
    {"from": "human", "value": "Compute 2x+3=12 in Korean"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = inputs, streamer = text_streamer, **params)

# 2nd example
messages = [
    {"from": "human", "value": "What is Machine Learning in Korean?"},
    {"from": "assistant", "value": "인공지능의 한 분야로 방대한 데이터를 분석해 향후 패턴을 예측하는 기법입니다."},
    {"from": "human", "value": "What is Deep Learning in Korean?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = inputs, streamer = text_streamer, **params)
```
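
If unsloth is not available in your environment, loading the checkpoint with vanilla transformers should also work. The sketch below uses only the standard Hugging Face API and has not been verified against this checkpoint; note that plain `role`/`content` messages are used here, since the tokenizer's built-in chat template applies the phi-3 format without the ShareGPT-style mapping.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_path = "daekeun-ml/Phi-3-medium-4k-instruct-ko-poc-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # use torch.float16 on pre-Ampere GPUs
    device_map="auto",
)

messages = [{"role": "user", "content": "What is Machine Learning in Korean?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

streamer = TextStreamer(tokenizer)
_ = model.generate(inputs, streamer=streamer, max_new_tokens=256,
                   temperature=0.05, do_sample=True)
```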

### Inference results

```
# 1st example
<s><|user|> Continue the fibonacci sequence in Korean: 1, 1, 2, 3, 5, 8,<|end|><|assistant|> 피보나치 수열의 다음 숫자는 13, 21, 34, 55, 89 등입니다. 각 숫자는 앞의 두 숫자의 합입니다.<|end|><|user|> Compute 2x+3=12 in Korean<|end|><|assistant|> 방정식 2x + 3 = 12에서 x를 풀려면 다음 단계를 따르십시오.

1. 방정식의 양쪽에서 3을 빼서 방정식의 한쪽에 대해 x를 분리합니다.

2x + 3 - 3 = 12 - 3

2x = 9

2. 이제 방정식의 양쪽을 2로 나누어 x의 값을 구합니다.

2배 / 2 = 9 / 2

x = 4.5

따라서 방정식 2x + 3 = 12에 대한 해는 x = 4.5입니다.<|end|>

# 2nd example
<s><|user|> What is Machine Learning in Korean?<|end|><|assistant|> 인공지능의 한 분야로 방대한 데이터를 분석해 향후 패턴을 예측하는 기법입니다.<|end|><|user|> What is Deep Learning in Korean?<|end|><|assistant|> 복잡한 데이터 세트를 분석하고 복잡한 패턴을 인식하고 학습하는 데 사용되는 딥러닝은 많은 레이어로 구성된 신경망의 하위 집합입니다. 이 기술은 이미지 인식, 자연어 처리 및 자율 운전과 같은 다양한 응용 분야에서 큰 발전을 이뤘습니다.<|end|>
```

### References

- Base model: [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct)

## Notes

### License

Apache 2.0. Although Phi-3 itself is licensed under MIT, I chose Apache 2.0 in consideration of the licensing of the datasets and the library used for training.

### Caution

This model was created as a personal experiment and is unrelated to the organization I work for. Because no separate verification was performed, the model may not operate correctly. Please use it only for personal experimentation or PoC (Proof of Concept) purposes!