# Zenos GPT-J 6B Alpaca-Evol 4-bit
## Model Overview
- **Name:** zenos-gpt-j-6B-alpaca-evol-4bit
- **Datasets Used:** [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish), [Evol Instruct](https://huggingface.co/datasets/FreedomIntelligence/evol-instruct-spanish)
- **Architecture:** GPT-J
- **Model Size:** 6 Billion parameters
- **Precision:** 4 bits
- **Fine-tuning:** This model was fine-tuned using Low-Rank Adaptation (LoRA).
- **Content Moderation:** This model is not moderated.
## Description
Zenos GPT-J 6B Alpaca-Evol 4-bit is a Spanish instruction-following model based on the GPT-J architecture with 6 billion parameters. It was fine-tuned on the Alpaca Spanish and Evol Instruct datasets, making it particularly suitable for natural language understanding and generation tasks in Spanish.
### Requirements
The following **specific** up-to-date forks are required to load and/or manipulate this model, at least until the corresponding PRs are merged into the upstream repositories. They add support for saving and loading 4-bit models with LoRA adapters included.
- [bitsandbytes](https://github.com/webpolis/bitsandbytes)
- [transformers](https://github.com/webpolis/transformers)
Since this is a compressed (4-bit) version, the model fits in roughly 7 GB of VRAM.
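As a quick sanity check before loading, you can verify how much VRAM is available. This is only a minimal sketch; the ~7 GB figure comes from this card, and actual usage also depends on context length and generation settings:

```python
import torch

# Rough check that the ~7 GB 4-bit checkpoint fits on the current GPU.
if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    print(f"Free VRAM: {free_bytes / 1e9:.1f} GB of {total_bytes / 1e9:.1f} GB")
else:
    print("No CUDA device found; 4-bit inference with bitsandbytes requires a GPU.")
```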
## Usage
You can use this model for various natural language processing tasks such as text generation, translation, summarization, and more. Below is an example of how to use it in Python with the Transformers library:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("zenos-gpt-j-6B-alpaca-evol-4bit")
model = AutoModelForCausalLM.from_pretrained("zenos-gpt-j-6B-alpaca-evol-4bit")

# Build the prompt; note the spaces around the instruction inside [INST] ... [/INST]
prompt = '[INST] Escribe un poema breve usando cuatro versos [/INST]'
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)
attention_mask = inputs["attention_mask"].to(model.device)

generation_config = GenerationConfig(
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=1,
    repetition_penalty=1.5,
    do_sample=True
)

with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        pad_token_id=tokenizer.eos_token_id,
        attention_mask=attention_mask,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=False,
        max_new_tokens=512,
        early_stopping=True
    )

# Decode the full output and keep only the text between the answer marker
# and the end-of-text token.
s = generation_output.sequences[0]
output = tokenizer.decode(s)
start_txt = output.find('### Respuesta:\n') + len('### Respuesta:\n')
end_txt = output.find("<|endoftext|>", start_txt)
answer = output[start_txt:end_txt]
print(answer)
```
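For repeated queries, the steps above can be wrapped in a small helper. This is only a sketch; the `ask` function and its defaults are illustrative and reuse the `tokenizer`, `model`, and `generation_config` objects defined above:

```python
def ask(instruction: str, max_new_tokens: int = 512) -> str:
    # Wrap the instruction in the [INST] ... [/INST] template the model expects.
    prompt = f"[INST] {instruction} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            generation_config=generation_config,
            pad_token_id=tokenizer.eos_token_id,
            max_new_tokens=max_new_tokens,
        )
    text = tokenizer.decode(output_ids[0])
    # Keep only the text between the answer marker and the end-of-text token.
    start = text.find("### Respuesta:\n") + len("### Respuesta:\n")
    end = text.find("<|endoftext|>", start)
    return text[start:end].strip()

print(ask("Escribe un poema breve usando cuatro versos"))
```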
## Inference
Currently, Hugging Face's hosted Inference UI does not load this model properly. However, you can run it with regular Python code as shown above once the [requirements](#requirements) are met.
## Acknowledgments
This model was developed by [Nicolás Iglesias](mailto:[email protected]) using the Hugging Face Transformers library.
## License
Copyright 2023 [Nicolás Iglesias](mailto:[email protected])
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this software except in compliance with the License.
You may obtain a copy of the License at
[Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.