shouryashashank committed on
Commit d09b2c1 · verified · 1 Parent(s): f81cd78

Update README.md

Files changed (1): README.md (+109 −3)

---
license: agpl-3.0
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: predacons
tags:
- reasoning
- chain of thought
- problem solving
---

## Model Details

### Model Description

**Predacon/Pico-R1-1.5b** is a compact, efficient language model fine-tuned from the `deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B` base model. Despite its small footprint of just 0.99 GB, it performs strongly on tasks that require logical reasoning and structured, step-by-step thought.

- **Developed by:** [Shourya Shashank](https://huggingface.co/shouryashashank)
- **Model type:** Transformer-based language model
- **Language(s) (NLP):** English
- **License:** AGPL-3.0
- **Finetuned from model:** deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
+
28
+
29
+ #### Key Features:
30
+
31
+ * **Compact Size**: At only 1.6GB, it is lightweight and easy to deploy, making it suitable for environments with limited computational resources.
32
+ * **High Accuracy**: The model’s training on a specialized chain of thought and reasoning dataset enhances its ability to perform complex reasoning tasks with high precision.
33
+ * **Fine-Tuned on Qwen1.5-1.8B**: Leveraging the robust foundation of the “deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B” model, it inherits strong language understanding and generation capabilities.
34
+
35
+ #### Applications:
36
+
37
+ * **Educational Tools**: Ideal for developing intelligent tutoring systems that require nuanced understanding and explanation of concepts.
38
+ * **Customer Support**: Enhances automated customer service systems by providing accurate and contextually relevant responses.
39
+ * **Research Assistance**: Assists researchers in generating hypotheses, summarizing findings, and exploring complex datasets.


## Uses

* Lightweight: The model is designed to be lightweight, so it runs efficiently on modest hardware without extensive resources.
* Natural Language Understanding: Well suited to applications requiring human-like text understanding and generation, such as chatbots, virtual assistants, and content generation tools.
* Small Size: At just 0.99 GB, it is quick to download and easy to install.
* High Reliability: The chain-of-thought approach integrated into its training helps it produce consistent, well-reasoned outputs.

### Direct Use

* Problem Explanation: Generate detailed descriptions and step-by-step reasoning for a variety of problems, useful in educational contexts, customer support, and automated troubleshooting.
* Natural Language Understanding: Human-like text understanding and generation for chatbots, virtual assistants, and content generation tools.
* Compact Deployment: Suitable for environments with limited computational resources thanks to its small size and support for 4-bit quantization, as shown in the sketch below.
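
The following is a minimal sketch of loading the checkpoint in 4-bit precision with the Hugging Face `transformers` + `bitsandbytes` integration. This is an assumption for illustration: the card's own examples use the predacons library, and the exact quantization settings used for the published weights are not documented here.

```python
# Minimal 4-bit loading sketch (assumes transformers, accelerate, and bitsandbytes are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Predacon/Pico-R1-1.5b"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits at load time
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Loaded this way, the weights take noticeably less GPU memory than full precision, which is the point of the compact-deployment use case above.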

### Downstream Use

* Educational Tools: Fine-tune the model on educational datasets to provide detailed explanations and reasoning for academic subjects; a minimal fine-tuning sketch follows this list.
* Customer Support: Fine-tune on customer service interactions to enhance automated support systems with accurate and context-aware responses.
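
As one possible recipe, here is a minimal LoRA fine-tuning sketch using `transformers` + `peft`. Everything in it is illustrative: the dataset name `my_org/edu-explanations` is hypothetical, the hyperparameters are placeholders, and LoRA via peft is an assumed technique, not the documented training recipe behind Pico-R1-1.5b.

```python
# Minimal LoRA fine-tuning sketch (assumes transformers, peft, and datasets are installed).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "Predacon/Pico-R1-1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token  # ensure a pad token exists
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach low-rank adapters so only a small fraction of the weights are trained.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical instruction/explanation dataset with a "text" column.
dataset = load_dataset("my_org/edu-explanations", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pico-r1-edu",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```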


## Bias, Risks, and Limitations

### Limitations

**Predacon/Pico-R1-1.5b** is a compact model designed for efficiency, but it comes with certain limitations:

1. **Limited Context Understanding**:
   - With its smaller parameter count, the model may struggle to produce contextually rich and nuanced responses compared to larger models.

2. **Bias and Fairness**:
   - Like all language models, Predacon/Pico-R1-1.5b may reflect biases present in its training data. Users should review generated outputs for potential bias.

3. **Resource Constraints**:
   - Although the model is designed to be efficient, a GPU is still recommended for optimal performance. Users with limited computational resources may experience slower inference.

### Example Usage

```python
import predacons

# Load the model and tokenizer
model_path = "Predacon/Pico-R1-1.5b"
model = predacons.load_model(model_path)
tokenizer = predacons.load_tokenizer(model_path)

# Ask the model to reason through a simple physics problem
chat = [
    {"role": "user", "content": "A train travelling at a speed of 60 km/hr is stopped in 15 seconds by applying the brakes. Determine its retardation."},
]

# Generate a chat-style response with sampling enabled
res = predacons.chat_generate(
    model=model,
    sequence=chat,
    max_length=5000,
    tokenizer=tokenizer,
    trust_remote_code=True,
    do_sample=True,
)

print(res)
```

This example demonstrates how to load the `Predacon/Pico-R1-1.5b` model with the predacons library and use it to generate a reasoned explanation for a given query, keeping in mind the limitations mentioned above.
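
For reference, the answer to the example prompt can be checked by hand (this is the reference calculation, not model output):

$$u = 60\ \text{km/hr} = \frac{60 \times 1000}{3600}\ \text{m/s} \approx 16.67\ \text{m/s}, \qquad a = \frac{v - u}{t} = \frac{0 - 16.67}{15} \approx -1.11\ \text{m/s}^2$$

i.e. a retardation of about 1.11 m/s², which is what a correct chain-of-thought response should arrive at.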

## Model Card Authors

[Shourya Shashank](https://huggingface.co/shouryashashank)