iperbole committed
Commit 891190d · verified · 1 Parent(s): f2d2273

Update README.md

Files changed (1):
1. README.md +5 -3

README.md CHANGED
@@ -16,7 +16,7 @@ library_name: transformers

The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs) is a family of 7B generative models (text in/text out) adapted from **Mistral-7B-Base-v0.1**.

- *Mistral-v0.1-Italian-LAPT* is a continual trained mistral model.
+ Mistral-v0.1-Italian-FVT is a continually trained Mistral model.

**Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR

@@ -24,7 +24,7 @@ The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs), is a

## Data used for the adaptation

- The **Mistral-7B-v0.1-Adapted** model are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
+ The **Mistral-7B-v0.1-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data are skewed toward Italian, with English making up one quarter of the mixture: the first 9B tokens are taken from the Italian portion of CulturaX and the first 3B tokens from the English portion, for 12B tokens in total.

@@ -32,7 +32,7 @@ The data are extracted to be skewed toward Italian language with a ration of one

You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

- Make sure to update your transformers installation via pip install --upgrade transformers.
+ Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import transformers
@@ -47,6 +47,8 @@ pipeline = transformers.pipeline(
pipeline("Cosa si può fare in una bella giornata di sole?")
```

+ Code: https://github.com/SapienzaNLP/sava
+

## Citation

If you use any part of this work, please consider citing the paper as follows:
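
As a rough illustration of the data mixture described in the adaptation section, the sketch below streams CulturaX and keeps the first 9B Italian and 3B English tokens. It is a minimal sketch, assuming the `uonlp/CulturaX` language configs (`it`, `en`), its `text` field, and the base Mistral tokenizer; the authors' actual extraction pipeline is not published here, and CulturaX is gated, so access must be granted first.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Tokenizer of the base model being adapted.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

def take_first_tokens(lang, budget):
    """Stream one language split of CulturaX and yield documents until
    the cumulative token count reaches `budget`."""
    stream = load_dataset("uonlp/CulturaX", lang, split="train", streaming=True)
    total = 0
    for doc in stream:
        yield doc["text"]
        total += len(tokenizer(doc["text"]).input_ids)
        if total >= budget:
            break

# Italian-skewed 3:1 mixture: ~9B Italian tokens and ~3B English tokens.
italian_docs = take_first_tokens("it", 9_000_000_000)
english_docs = take_first_tokens("en", 3_000_000_000)
```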
 
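The diff above truncates the Python snippet (README lines 39-46 fall between the two hunks). A hedged reconstruction is shown below: the `text-generation` task, the dtype and device settings, and the model ID are assumptions for illustration, not the repository's exact code.

```python
import torch
import transformers

# Placeholder model ID: substitute the actual checkpoint name of this repository.
model_id = "SapienzaNLP/Mistral-7B-v0.1-Italian-FVT"

# Build a text-generation pipeline; bfloat16 and device_map="auto" are
# common settings for a 7B model, assumed here rather than taken from the diff.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Italian prompt: "What can one do on a beautiful sunny day?"
pipeline("Cosa si può fare in una bella giornata di sole?")
```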