Input: Models input text only.

Output: Models generate text only.

Base Model: beomi/Yi-Ko-6B

Training Dataset

Implementation Code

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "ifuseok/yi-ko-playtus-instruct-v0.2"

# Load the instruct model in half precision and place it automatically
# across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)

Prompt Example

<|system|>
This is the system message.<|endoftext|>
<|user|>
This is the user message.<|endoftext|>
<|assistant|>
This is the assistant message.<|endoftext|>
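
The following is a minimal generation sketch that fills the template above with an example conversation and reuses the model and tokenizer from the Implementation Code section. The system and user messages, the sampling parameters, and the <|endoftext|> stop-token handling are illustrative assumptions, not taken from the original card.

# Minimal generation sketch (illustrative assumptions, not from the card).
prompt = (
    "<|system|>\n"
    "You are a helpful assistant.<|endoftext|>\n"
    "<|user|>\n"
    "What is the capital of South Korea?<|endoftext|>\n"
    "<|assistant|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    # Assumes <|endoftext|> is a known token, so generation stops at the turn boundary.
    eos_token_id=tokenizer.convert_tokens_to_ids("<|endoftext|>"),
)

# Decode only the newly generated tokens (the assistant's reply).
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(reply)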