Tags: Text Generation · Transformers · PyTorch · Safetensors · llama · text-generation-inference
mfromm committed · commit d49562c · verified · 1 parent: ce14779

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED

@@ -74,7 +74,7 @@ After installation, here's an example of how to use the model:
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "EuropeanLLM-Beta/Teuken-7B-base-v0.6"
+model_name = "openGPT-X/Teuken-7B-base-v0.6"
 prompt = "Insert text here..."
 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
 tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
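The commit only swaps the organization in the repository path; the surrounding README snippet is otherwise unchanged. For context, here is a sketch of how that snippet could be rounded out into a complete generation example. The `generate` helper, the `bfloat16` dtype, and the `max_new_tokens` value are illustrative assumptions, not part of the commit (running it downloads the full 7B checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository path as of this commit
model_name = "openGPT-X/Teuken-7B-base-v0.6"


def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Load the model and continue `prompt` (assumed helper; downloads weights on first use)."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        trust_remote_code=True,
        torch_dtype=torch.bfloat16,  # assumed dtype; reduces memory vs. float32
    ).to(device)
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Insert text here..."))
```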