juliehunter committed commit 6c7ca97 (verified; parent: 8ac48c1)

Update README.md

Files changed (1): README.md (+206 −4)

README.md CHANGED
```diff
@@ -4,15 +4,17 @@ pipeline_tag: text-generation
 language:
 - fr
 - en
-- it
-- de
-- es
 tags:
 - pretrained
 - llama-3
 - openllm-france
 datasets:
-- OpenLLM-France/Lucie-Training-Dataset
+- yahma/alpaca-cleaned
+- cmh/alpaca_data_cleaned_fr_52k
+- Magpie-Align/Magpie-Gemma2-Pro-200K-Filtered
+- allenai/WildChat-1M
+base_model:
+- OpenLLM-France/Lucie-7B
 widget:
 - text: |-
     Quelle est la capitale de l'Espagne ? Madrid.
@@ -25,3 +27,203 @@ training_progress:
 context_length: 32000
 ---
```

The second hunk adds the new model card body:
# Model Card for Lucie-7B-Instruct-v1.1

* [Model Description](#model-description)
<!-- * [Uses](#uses) -->
* [Training Details](#training-details)
  * [Training Data](#training-data)
  * [Preprocessing](#preprocessing)
  * [Instruction template](#instruction-template)
  * [Training Procedure](#training-procedure)
<!-- * [Evaluation](#evaluation) -->
* [Testing the model](#testing-the-model)
  * [Test with ollama](#test-with-ollama)
  * [Test with vLLM](#test-with-vllm)
* [Citation](#citation)
* [Acknowledgements](#acknowledgements)
* [Contact](#contact)
## Model Description

Lucie-7B-Instruct-v1.1 is a fine-tuned version of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), an open-source, multilingual causal language model created by OpenLLM-France.

Lucie-7B-Instruct is fine-tuned on synthetic instructions produced by ChatGPT and Gemma, as well as a small set of customized prompts about OpenLLM and Lucie. It is optimized for the generation of French text. Note that it has not been trained for code generation or optimized for math. Such capacities can be improved through further fine-tuning and alignment with methods such as DPO, RLHF, etc.

While Lucie-7B-Instruct is trained on sequences of 4096 tokens, its base model, Lucie-7B, has a context size of 32K tokens. Based on needle-in-a-haystack evaluations, Lucie-7B-Instruct maintains the capacity of the base model to handle 32K-token context windows.

## Training details

### Training data

Lucie-7B-Instruct-v1.1 is trained on the following datasets:
* [Alpaca-cleaned-fr](https://huggingface.co/datasets/cmh/alpaca_data_cleaned_fr_52k) (French; 51,655 samples)
* [Croissant-Aligned-Instruct](https://huggingface.co/datasets/OpenLLM-France/Croissant-Aligned-Instruct) (English-French; 20,000 samples taken from 80,000 total)
* [ENS](https://huggingface.co/datasets/Gael540/dataSet_ens_sup_fr-v1) (French; 394 samples)
* [FLAN v2 Converted](https://huggingface.co/datasets/ai2-adapt-dev/flan_v2_converted) (English; 78,580 samples)
* [Open Hermes 2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) (English; 1,000,495 samples)
* [Oracle](https://github.com/opinionscience/InstructionFr/tree/main/wikipedia) (French; 4,613 samples)
* [PIAF](https://www.data.gouv.fr/fr/datasets/piaf-le-dataset-francophone-de-questions-reponses/) (French; 1,849 samples)
* [TULU3 Personas Math](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-math)
* [TULU3 Personas Math Grade](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-math-grade)
* [Wildchat](https://huggingface.co/datasets/allenai/WildChat-1M) (French subset; 26,436 samples)
* Hard-coded prompts concerning OpenLLM and Lucie (based on [allenai/tulu-3-hard-coded-10x](https://huggingface.co/datasets/allenai/tulu-3-hard-coded-10x))
  * French: openllm_french.jsonl (24x10 samples)
  * English: openllm_english.jsonl (24x10 samples)

One epoch was passed over each dataset unless specified otherwise above.
### Preprocessing

* Filtering by keyword: examples were filtered out of the four synthetic datasets if the assistant response contained a keyword from the list [filter_strings](https://github.com/OpenLLM-France/Lucie-Training/blob/98792a1a9015dcf613ff951b1ce6145ca8ecb174/tokenization/data.py#L2012). This filter is designed to remove examples in which the assistant is presented as a model other than Lucie (e.g., ChatGPT, Gemma, Llama, ...).
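As an illustrative sketch only (the function and field names here are ours, not from the Lucie-Training code, and the real `filter_strings` list is much longer), the keyword filter can be thought of as:

```python
# Illustrative sketch of the keyword filter described above.
# FILTER_STRINGS stands in for the real (much longer) filter_strings list.
FILTER_STRINGS = ["ChatGPT", "Gemma", "Llama", "Mistral"]

def keep_example(example: dict) -> bool:
    """Return False if any assistant turn mentions a filtered model name."""
    for turn in example["conversations"]:
        if turn["role"] == "assistant" and any(
            keyword.lower() in turn["content"].lower() for keyword in FILTER_STRINGS
        ):
            return False
    return True

examples = [
    {"conversations": [{"role": "assistant", "content": "As ChatGPT, I can help."}]},
    {"conversations": [{"role": "assistant", "content": "I am Lucie, happy to help."}]},
]
kept = [ex for ex in examples if keep_example(ex)]
# Only the second example survives the filter.
```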
### Instruction template

Lucie-7B-Instruct-v1.1 was trained on the chat template from Llama 3.1, with the sole difference that `<|begin_of_text|>` is replaced with `<s>`. The resulting template:

```
<s><|start_header_id|>system<|end_header_id|>

{SYSTEM}<|eot_id|><|start_header_id|>user<|end_header_id|>

{INPUT}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{OUTPUT}<|eot_id|>
```

An example:

```
<s><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Give me three tips for staying in shape.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

1. Eat a balanced diet and be sure to include plenty of fruits and vegetables. \n2. Exercise regularly to keep your body active and strong. \n3. Get enough sleep and maintain a consistent sleep schedule.<|eot_id|>
```
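To make the template concrete, here is a small helper (our own illustration, not part of the released code) that renders a single-turn exchange in this format:

```python
def build_prompt(system, user, assistant=None):
    """Render the Lucie chat template (Llama 3.1 style, with <s> as BOS).

    If `assistant` is None, the prompt ends with the assistant header,
    ready for the model to complete.
    """
    prompt = (
        "<s><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
    if assistant is not None:
        prompt += f"{assistant}<|eot_id|>"
    return prompt

print(build_prompt("You are a helpful assistant.",
                   "Give me three tips for staying in shape."))
```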
### Training procedure

The model architecture and hyperparameters are the same as for [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B) during the annealing phase, with the following exceptions:
* context length: 4096<sup>*</sup>
* batch size: 1024
* max learning rate: 3e-5
* min learning rate: 3e-6

<sup>*</sup>As noted above, while Lucie-7B-Instruct is trained on sequences of 4096 tokens, it maintains the capacity of the base model, Lucie-7B, to handle context sizes of up to 32K tokens.
## Testing the model

### Test with ollama

* Download and install [Ollama](https://ollama.com/download)
* Download the [GGUF model](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct-v1.1-gguf/blob/main/Lucie-7B-Instruct-v1.1-q4_k_m.gguf)
* Copy the [`Modelfile`](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct-v1.1-gguf/blob/main/Modelfile), adapting the path to the GGUF file if necessary (the line starting with `FROM`).
* Run in a shell:
  * `ollama create -f Modelfile Lucie`
  * `ollama run Lucie`
* Once ">>>" appears, type your prompt(s) and press Enter.
* Optionally, restart a conversation by typing "`/clear`"
* End the session by typing "`/bye`".

Useful for debugging:
* [How to print input requests and output responses in Ollama server?](https://stackoverflow.com/a/78831840)
* [Documentation on Modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#parameter)
* Examples: [Ollama model library](https://github.com/ollama/ollama#model-library)
* Llama 3 example: https://ollama.com/library/llama3.1
* Add a GUI: https://docs.openwebui.com/
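For orientation, a minimal `Modelfile` might look like the following. This is a sketch that assumes the GGUF file sits in the current directory; the published `Modelfile` linked above is authoritative and also sets the chat template.

```
# Hypothetical minimal Modelfile; adapt the FROM path to your download location.
FROM ./Lucie-7B-Instruct-v1.1-q4_k_m.gguf
PARAMETER temperature 0.7
```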
### Test with vLLM

#### 1. Run vLLM Docker Container

Use the following command to deploy the model, replacing `INSERT_YOUR_HF_TOKEN` with your Hugging Face Hub token.

```bash
docker run --runtime nvidia --gpus=all \
    --env "HUGGING_FACE_HUB_TOKEN=INSERT_YOUR_HF_TOKEN" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model OpenLLM-France/Lucie-7B-Instruct-v1.1
```
#### 2. Test using OpenAI Client in Python

To test the deployed model, use the OpenAI Python client as follows:

```python
from openai import OpenAI

# Initialize the client
client = OpenAI(base_url='http://localhost:8000/v1', api_key='empty')

# Define the input content
content = "Hello Lucie"

# Generate a response
chat_response = client.chat.completions.create(
    model="OpenLLM-France/Lucie-7B-Instruct-v1.1",
    messages=[
        {"role": "user", "content": content}
    ],
)
print(chat_response.choices[0].message.content)
```
## Citation

When using the Lucie-7B-Instruct model, please cite the following paper:

✍ Olivier Gouvert, Julie Hunter, Jérôme Louradour, Christophe Cérisara, Evan Dufraisse, Yaya Sy, Laura Rivière, Jean-Pierre Lorré (2025). The Lucie-7B LLM and the Lucie Training Dataset: open resources for multilingual language generation

```bibtex
@misc{openllm2023claire,
  title={The Lucie-7B LLM and the Lucie Training Dataset:
         open resources for multilingual language generation},
  author={Olivier Gouvert and Julie Hunter and Jérôme Louradour and Christophe Cérisara and Evan Dufraisse and Yaya Sy and Laura Rivière and Jean-Pierre Lorré},
  year={2025},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
## Acknowledgements

This work was performed using HPC resources from GENCI–IDRIS (Grant 2024-GC011015444). We gratefully acknowledge support from GENCI and IDRIS and from Pierre-François Lavallée (IDRIS) and Stephane Requena (GENCI) in particular.

Lucie-7B-Instruct-v1.1 was created by members of [LINAGORA](https://labs.linagora.com/) and the [OpenLLM-France](https://www.openllm-france.fr/) community, including in alphabetical order:
Olivier Gouvert (LINAGORA),
Ismaïl Harrando (LINAGORA/SciencesPo),
Julie Hunter (LINAGORA),
Jean-Pierre Lorré (LINAGORA),
Jérôme Louradour (LINAGORA),
Michel-Marie Maudet (LINAGORA), and
Laura Rivière (LINAGORA).

We thank
Clément Bénesse (Opsci),
Christophe Cerisara (LORIA),
Émile Hazard (Opsci),
Evan Dufraisse (CEA List),
Guokan Shang (MBZUAI),
Joël Gombin (Opsci),
Jordan Ricker (Opsci), and
Olivier Ferret (CEA List)
for their helpful input.

Finally, we thank the entire OpenLLM-France community, whose members have helped in diverse ways.
## Contact
229