---
language:
- en
- mr
- hi
- gu
- pa
- te
- ta
- ml
- kn
- sd
- ne
- ur
- as
- bn
- or
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: google/gemma-2b
pipeline_tag: text-generation
---
<img src="https://github.com/Pmking27/AutoTalker/assets/97112558/96853321-e460-4464-a062-9bd1633964d8" width="600" height="600">
# Uploaded model
- **Developed by:** pmking27
- **License:** apache-2.0
- **Finetuned from model:** google/gemma-2b

This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
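The exact training configuration is not published on this card. As a rough orientation only, a minimal Unsloth + TRL SFT sketch of this kind of run might look like the following; the dataset, LoRA settings, and hyperparameters below are assumptions, not the author's actual setup:

```python
# Minimal sketch of an Unsloth + TRL fine-tuning run (illustrative only:
# the dataset, LoRA rank, and hyperparameters are assumptions).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model with Unsloth's optimized loader
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-2b",
    max_seq_length=2048,
    load_in_4bit=True,  # assumption: QLoRA-style 4-bit training
)

# Attach LoRA adapters (rank, alpha, and target modules are illustrative)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset: expects a "text" column of Alpaca-formatted examples
dataset = load_dataset("json", data_files="alpaca_multilingual.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```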
### Running the Model:
```python
# Importing necessary modules
from transformers import AutoModelForCausalLM, AutoTokenizer
# Selecting the device to run the model on ('cuda' assumes a GPU; use 'cpu' otherwise)
device = 'cuda'
# Loading the tokenizer for the model
tokenizer = AutoTokenizer.from_pretrained("pmking27/PrathameshLLM-2B")
# Loading the pre-trained model
model = AutoModelForCausalLM.from_pretrained("pmking27/PrathameshLLM-2B")
# Defining the Alpaca prompt template
alpaca_prompt = """
### Instruction:
{}
### Input:
{}
### Response:
{}"""
# Providing the input to the model
model_inputs = tokenizer(
    [
        alpaca_prompt.format(
            '''
You're an assistant trained to answer questions using the given context.
context:
General elections will be held in India from 19 April 2024 to 1 June 2024 to elect the 543 members of the 18th Lok Sabha. The elections will be held in seven phases and the results will be announced on 4 June 2024. This will be the largest-ever election in the world, surpassing the 2019 Indian general election, and will be the longest-held general elections in India with a total span of 44 days (excluding the first 1951–52 Indian general election). The incumbent prime minister Narendra Modi who completed a second term will be contesting elections for a third consecutive term.
Approximately 960 million individuals out of a population of 1.4 billion are eligible to participate in the elections, which are expected to span a month for completion. The Legislative assembly elections in the states of Andhra Pradesh, Arunachal Pradesh, Odisha, and Sikkim will be held simultaneously with the general election, along with the by-elections for 35 seats among 16 states.
''', # instruction
            "भारतातील सार्वत्रिक निवडणुका किती टप्प्यात पार पडतील?", # input
            "", # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
)
# Moving the model and the tokenized inputs to the selected device
model = model.to(device)
model_inputs = model_inputs.to(device)
# Generating responses from the model
outputs = model.generate(**model_inputs, max_new_tokens=100)
decoded_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
# Locating the response after the "### Response:" marker
# (skip_special_tokens=True removes "<eos>", so fall back to the end of the string if it is absent)
start_marker = "### Response:"
end_marker = "<eos>"
start_pos = decoded_output.find(start_marker) + len(start_marker)
end_pos = decoded_output.find(end_marker, start_pos)
if end_pos == -1:
    end_pos = len(decoded_output)

# Extracting the generated answer
response_text = decoded_output[start_pos:end_pos].strip()
print(response_text)
```
### Output:
```markdown
भारतातील सार्वत्रिक निवडणुका 7 टप्प्यांमध्ये पार पडतील.
```
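For GPUs with limited memory, the same checkpoint can also be loaded in 4-bit precision through bitsandbytes. This is an optional variant, not something the card requires; it assumes the `bitsandbytes` and `accelerate` packages are installed:

```python
# Optional: 4-bit loading for memory-constrained GPUs
# (assumes bitsandbytes and accelerate are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("pmking27/PrathameshLLM-2B")
model = AutoModelForCausalLM.from_pretrained(
    "pmking27/PrathameshLLM-2B",
    quantization_config=bnb_config,
    device_map="auto",  # lets accelerate place layers on the available GPU(s)
)

# The prompting and generation code above works unchanged; just move the
# tokenized inputs to model.device instead of a hard-coded 'cuda' string.
```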
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)