---
license: apache-2.0
datasets:
- Nikity/Kyoto-Corpus
language:
- en
base_model:
- Nikity/lille-130m-base
base_model_relation: finetune
model-index:
- name: lille-130m-instruct
  results:
  - task:
      type: text-generation
    dataset:
      name: arc_challenge
      type: arc_challenge
    metrics:
    - name: ARC (Challenge)
      type: Accuracy
      value: 15.05
  - task:
      type: text-generation
    dataset:
      name: arc_easy
      type: arc_easy
    metrics:
    - name: ARC (Easy)
      type: Accuracy
      value: 21.4
  - task:
      type: text-generation
    dataset:
      name: gpqa
      type: gpqa
    metrics:
    - name: GPQA
      type: Accuracy
      value: 12.73
  - task:
      type: text-generation
    dataset:
      name: gsm8k
      type: gsm8k
    metrics:
    - name: GSM8K
      type: Accuracy
      value: 7.73
  - task:
      type: text-generation
    dataset:
      name: ifeval
      type: ifeval
    metrics:
    - name: IFEVAL
      type: Accuracy
      value: 9.01
  - task:
      type: text-generation
    dataset:
      name: math
      type: math
    metrics:
    - name: MATH (Level 5)
      type: Accuracy
      value: 1.91
  - task:
      type: text-generation
    dataset:
      name: mmlu
      type: mmlu
    metrics:
    - name: MMLU
      type: Accuracy
      value: 22.76
  - task:
      type: text-generation
    dataset:
      name: mt_bench
      type: mt_bench
    metrics:
    - name: MT-Bench
      type: Accuracy
      value: 8.2
  - task:
      type: text-generation
    dataset:
      name: truthful_qa
      type: truthful_qa
    metrics:
    - name: TruthfulQA
      type: Accuracy
      value: 9.06
pipeline_tag: text-generation
---

# Lille 130M Instruct

![Lille-Header](assets/lille-header.png)

> **You are currently viewing the `lille-130m-instruct` model card.**
>
> View the base model here: **[Nikity/lille-130m-base](https://huggingface.co/Nikity/lille-130m-base)**

## Table of Contents
1. [Model Summary](#-model-summary)
2. [Evaluation](#-evaluation)
3. [How to Use](#-how-to-use)
4. [Training and Finetuning](#-training-and-finetuning)
5. [Training Details](#-training-details)
6. [Limitations](#limitations)
7. [The Truly Open-Source Stack](#-the-truly-open-source-repos)
8. [License](#-license)
9. [Citation](#citation)

## ✨ Model Summary

**Lille** is a 130-million-parameter language model built from the ground up as a core component of a completely open-source deep learning stack. The name Lille reflects both its compact size and strong capabilities, capturing the idea that less can be more: it draws on the Norwegian word *lille* ("small" or "little") as well as the French city Lille, giving it both meaning and place. It was trained using a custom tokenizer, a curated dataset, and a memory-efficient optimizer, all of which are publicly available.

The model comes in two versions:
* **`Lille-130M-Base`**: The foundational model, pretrained on 4.27 billion tokens from the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) dataset, with a post-processing step added to keep only the highest-quality content. It has strong general knowledge and text-completion abilities.
* **`Lille-130M-Instruct`**: The instruction-tuned version, fine-tuned on the **[Kyoto-Corpus](https://huggingface.co/datasets/Nikity/Kyoto-Corpus)**. It excels at following user commands, engaging in chat, and performing a variety of instruction-based tasks.

The model architecture is a modern Transformer decoder featuring Grouped-Query Attention (GQA), RoPE, and RMSNorm, making it efficient and performant for its size.

*Note on parameter count: While the model name is `130M` for simplicity, the actual parameter count is 127.17 million.*

## 📊 Evaluation

All evaluations were conducted using **[simple-eval](https://github.com/Nikityyy/simple-eval)**, our open-source evaluation framework. Benchmarks are run in a zero-shot setting unless specified otherwise.

#### `Lille-130M-Instruct`

![Evaluations](assets/evaluations.png)

> Evaluations for other LLMs are sourced from the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) or their respective model cards when benchmark data is unavailable. For Lille 130M Instruct, evaluations are performed using [simple-eval](https://github.com/Nikityyy/simple-eval); ARC-C and ARC-E for SmolLM2 are also evaluated with simple-eval.

## 🚀 How to Use

There are several ways to use the Lille models, from easy-to-use graphical interfaces to advanced programmatic control.

### 1. LM Studio (Easiest for Chat)

LM Studio provides a simple graphical interface to run LLMs on your local machine. It's the easiest way to start chatting with Lille.

1. **Download & Install:** Get [LM Studio](https://lmstudio.ai/) for your operating system (Windows, Mac, or Linux).
2. **Search for the Model:** Open LM Studio and click the **magnifying glass** icon on the left.
3. **Find Lille:** In the search bar, type `Lille` or `Nikity`. You will find the models I have uploaded.
4. **Download a GGUF:** On the right-hand side, you'll see a list of GGUF files. Download a recommended version like `lille-130m-instruct-f16.gguf`.
5. **Chat:** Click the **speech bubble** icon on the left. At the top, select the model you just downloaded. Now you can start a conversation!
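
LM Studio can also expose a downloaded model through its local, OpenAI-compatible HTTP server, which makes it scriptable without any extra ML tooling. A minimal sketch, assuming the server is running on its default port `1234` and that `lille-130m-instruct` is the identifier LM Studio shows for the loaded model:

```python
import requests

# Assumes LM Studio's local server is running on its default port (1234)
# and that "lille-130m-instruct" matches the identifier shown in LM Studio.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "lille-130m-instruct",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "max_tokens": 50,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```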

### 2. SimpleAI SDK (Recommended for Programmatic Use)

The easiest way to use Lille programmatically is with the `simpleai-sdk`, which handles all the boilerplate for you and provides a simple, high-level API for both Hugging Face and ONNX backends.

```bash
pip install simpleai-sdk
```

```python
from simple_ai import lille

# This will download and cache the model on first run.
# Specify the model version: "130m-instruct" (default) or "130m-base"
# Specify the backend: "huggingface" (default) or "onnx"
model = lille("huggingface", "130m-instruct")

# --- For Chat (with instruct model) ---
print("--- Chat Example ---")
response1 = model.chat("What is the capital of France?", max_new_tokens=50)
print(f"Bot: {response1}")

response2 = model.chat("And what is its population?", max_new_tokens=50, top_p=0.90)
print(f"Bot: {response2}")

# This resets the chat history
model.reset_chat()

# --- For Text Completion (with base or instruct model) ---
prompt = "Artificial Intelligence is"
response = model.generate(prompt, max_new_tokens=50, temperature=0.9)
print(f"\n--- Completion Example ---\n{prompt}{response}")
```

### 3. Standard Hugging Face Transformers (currently also requires `simpleai-sdk`)

You can also use the model directly with the `transformers` library for more advanced use cases.

```bash
pip install transformers torch simpleai-sdk
```

```python
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM
from simple_ai.model_hf import LilleConfig, LilleForCausalLM

# 1. Register the custom model architecture with Hugging Face
AutoConfig.register("lille-130m", LilleConfig)
AutoModelForCausalLM.register(LilleConfig, LilleForCausalLM)

# 2. Define constants and set up the device
MODEL = "Nikity/lille-130m-instruct"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# 3. Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    torch_dtype="auto",
    device_map=DEVICE,
)

# 4. Prepare the chat prompt and tokenize it
chat = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    return_tensors="pt"
).to(DEVICE)

# 5. Generate a response
with torch.inference_mode():
    outputs = model.generate(
        input_ids=inputs,
        max_new_tokens=512,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        do_sample=True,
        temperature=0.5,
        top_p=0.95,
    )

# 6. Decode and print the response
response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
print(response)
```

## 🚀 Training and Finetuning

You can replicate the pretraining of `Lille-130M-Base` or fine-tune it on your own dataset using the provided scripts.

#### 1. Setup

First, clone the repository and install the required dependencies:

```bash
git clone https://github.com/Nikityyy/lille
cd lille
pip install -r requirements.txt
```

**Note on the Optimizer:** The default `Sophia-Triton` optimizer requires the [Triton](https://triton-lang.org/main/getting-started/installation.html) library. Triton is officially supported on Linux with NVIDIA GPUs. While experimental installation on Windows is possible, it can be a complex and difficult process. For a much simpler setup on **Windows and macOS**, or if you prefer not to install Triton, it is highly recommended to use a pure PyTorch implementation of Sophia instead:

1. Replace the contents of the `sophia_triton.py` file with the code from [this link](https://github.com/Liuhong99/Sophia/blob/main/sophia.py).
2. The `train.py` script should work without any import changes, as the class name `SophiaG` is the same (see the sketch below).
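
For reference, a minimal sketch of how `SophiaG` is constructed either way, since both implementations expose the same class through `sophia_triton.py`. The hyperparameter values here are illustrative, not the repo's configured defaults, and the reference PyTorch implementation also expects the training loop to call `optimizer.update_hessian()` every few steps:

```python
import torch
from sophia_triton import SophiaG  # same import whether the Triton or pure-PyTorch code is inside

# Stand-in module; in train.py this is the Lille model.
model = torch.nn.Linear(640, 640)

# Illustrative hyperparameters; use the values configured in train.py.
optimizer = SophiaG(model.parameters(), lr=1e-4, betas=(0.965, 0.99), rho=0.04, weight_decay=0.1)

loss = model(torch.randn(8, 640)).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```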

#### 2. Data Preparation

The training script expects data in a specific `.npz` format containing tokenized documents and their offsets.
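
After running either preparation script, you can sanity-check a produced archive with NumPy. The exact array names are an internal detail of the scripts, so this sketch simply lists whatever is stored:

```python
import numpy as np

# Hypothetical path; point this at whichever .npz your preparation script produced.
data = np.load("data/my_dataset/train.npz")
print(data.files)  # names of the stored arrays
for name in data.files:
    print(name, data[name].shape, data[name].dtype)
```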

**For Pretraining (like FineWeb-Edu):**

Use the `prepare_dataset_fineweb.py` script. It will stream the dataset from Hugging Face, apply filters, tokenize the text, and save it in the required format.

```bash
python prepare_dataset_fineweb.py
```
This will create `data/fineweb_edu_sample_10BT/train.npz` and `val.npz`.

**For Finetuning (Instruction Datasets):**

Use the `prepare_dataset.py` script. Your input data should be a single `.txt` file where each example is separated by the `<|endoftext|>` token.
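
As an illustration, here is a minimal sketch that writes such a file; the two examples (and their User/Assistant layout) are purely hypothetical, so format yours however your data requires:

```python
import os

# Toy examples; the layout of each string is illustrative only.
examples = [
    "User: What is 2 + 2?\nAssistant: 4.",
    "User: Name the capital of France.\nAssistant: Paris.",
]

os.makedirs("data/my_dataset", exist_ok=True)
# prepare_dataset.py expects one .txt file with examples separated by <|endoftext|>.
with open("data/my_dataset/train.txt", "w", encoding="utf-8") as f:
    f.write("<|endoftext|>".join(examples))
```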

1. Place your data file, for example, at `data/my_dataset/train.txt`.
2. Modify the `input_file_path` and `output_dir` variables in `prepare_dataset.py`.
3. Run the script:

```bash
python prepare_dataset.py
```
This will create `train.npz` and `val.npz` in your specified output directory.

#### 3. Running the Training Script

All training logic is handled by `train.py`. You can configure hyperparameters directly at the top of this file.

**To Pretrain from Scratch:**

1. Ensure you have prepared a pretraining dataset.
2. In `train.py`, set `finetune = False`.
3. Configure pretraining parameters like `data_dir`, `batch_size`, etc.
4. Run the script:

```bash
python train.py
```

**To Fine-tune a Pretrained Model:**

1. Ensure you have prepared a fine-tuning dataset.
2. In `train.py`, set `finetune = True`.
3. Set `resume_checkpoint` to the path of the pretrained model checkpoint (e.g., `checkpoints/best_model.pt`).
4. Configure fine-tuning parameters like `finetune_data_dir` and `finetune_learning_rate`.
5. Run the script:

```bash
python train.py
```
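
Putting the fine-tuning steps together, the edited configuration block at the top of `train.py` would look roughly like this; only the variable names are taken from the steps above, the values are illustrative:

```python
# --- Illustrative configuration at the top of train.py ---
# Only the variable names below come from the steps above; the values are examples.
finetune = True
resume_checkpoint = "checkpoints/best_model.pt"  # pretrained checkpoint to start from
finetune_data_dir = "data/my_dataset"            # directory containing train.npz / val.npz
finetune_learning_rate = 1e-5                    # typically lower than the pretraining LR
```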

Checkpoints will be saved in the directory specified by `out_dir` (for pretraining) or `finetune_out_dir` (for fine-tuning). The best model based on validation loss will be saved as `best_model.pt`.

## 🛠️ Training Details

### Pretraining (`Lille-130M-Base`)
* **Dataset:** Pretrained on **4.27 billion tokens** from the `sample-10BT` configuration of the [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) dataset.
* **Tokenizer:** The custom **[Hastings](https://github.com/Nikityyy/Hastings)** tokenizer with a 32,768-token vocabulary.
* **Optimizer:** The memory-efficient **[Sophia-Triton](https://github.com/Nikityyy/Sophia-Triton)** optimizer.
* **Hardware:** Trained on a single NVIDIA RTX 4070 Ti.
* **Precision:** bfloat16.

### Instruction Tuning (`Lille-130M-Instruct`)
* **Dataset:** Supervised Fine-Tuning (SFT) was performed on the **[Kyoto-Corpus](https://github.com/Nikityyy/Kyoto-Corpus)**, a high-quality, curated collection of conversational and instructional data.

### Model Architecture
* **Type:** Transformer Decoder
* **Layers:** 24
* **Embedding Size:** 640
* **Attention Heads:** 10
* **KV Heads (GQA):** 2
* **Context Length:** 512 tokens
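
As a rough sanity check on the parameter count, the embedding and attention sizes follow directly from the numbers above, while the feed-forward width is not listed and is therefore an explicit assumption in this sketch:

```python
# Back-of-the-envelope parameter estimate from the listed architecture.
# Everything marked "assumed" is NOT documented above, only illustrative.
vocab, d_model, n_layers = 32768, 640, 24
n_heads, n_kv_heads = 10, 2
head_dim = d_model // n_heads  # 64

embed = vocab * d_model  # token embeddings (assumed tied with the LM head)

# GQA attention: full-width Q and output projections, narrow K/V projections.
attn = (d_model * n_heads * head_dim           # Q
        + 2 * d_model * n_kv_heads * head_dim  # K and V
        + n_heads * head_dim * d_model)        # output

ffn_hidden = 4 * d_model        # assumed: the MLP width is not documented
ffn = 2 * d_model * ffn_hidden  # assumed: simple up/down projection

total = embed + n_layers * (attn + ffn)
print(f"~{total / 1e6:.1f}M parameters")  # ballpark only; the card states 127.17M
```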

## Limitations

Lille models primarily understand and generate content in English. While powerful for their size, they can produce text that may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.

## 🛠️ The Truly Open-Source Repos

Lille is a key component of my initiative to build and release a complete, truly open-source stack for language modeling. All components are designed to work together seamlessly.

* **Tokenizer:** **[Hastings](https://github.com/Nikityyy/Hastings)** - A modern, efficient tokenizer with a 32k vocabulary.
* **Dataset:** **[Kyoto-Corpus](https://github.com/Nikityyy/Kyoto-Corpus)** - A high-quality, small-scale dataset for instruction tuning.
* **Model:** **[lille](https://github.com/Nikityyy/lille)** (this model) - A powerful 130-million-parameter model trained from scratch.
* **Optimizer:** **[Sophia-Triton](https://github.com/Nikityyy/Sophia-Triton)** - A memory-efficient, Triton-based implementation of the SophiaG optimizer.
* **Evaluations:** **[simple-eval](https://github.com/Nikityyy/simple-eval)** - A straightforward framework for evaluating model performance using an LLM as a Judge.

## 📜 License

This project is licensed under the Apache-2.0 License.

## Citation

If you use Lille or any part of this open-source stack in your work, please consider citing it:

```bibtex
@misc{lille-130m,
  author       = {Nikita Berger},
  title        = {Lille: A Truly Open-Source 130M Language Model},
  year         = {2025},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/Nikityyy/lille}}
}
```