Upload README.md with huggingface_hub
Here, we removed the visual components of qwen2.5-vl and merged all LoRA adapters...

All models above provide F16, Q8_0, Q6_K, Q5_K_M, Q4_K_M, Q3_K_M quantizations. More quantizations, such as Unsloth-like dynamic quantizations, are on the way.
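
For example, assuming llama.cpp's `repo:tag` selector for `-hf` (the same syntax the `:F16` commands below use), you can pull a smaller quantization like this:

```bash
# Sketch: select the Q4_K_M quantization via the tag after the colon.
llama-embedding -hf jinaai/jina-embeddings-v4-text-matching-GGUF:Q4_K_M \
  --pooling mean -p "Query: jina is awesome" 2>/dev/null
```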

### Limitations vs. the original v4 model

- They cannot handle image input.
- They cannot output multi-vector embeddings.
- You must add `Query: ` or `Passage: ` in front of the input. [Check this table for details](#consistency-wrt-automodelfrom_pretrained).

## Multimodal Task-Specific Models
TBA
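
The `curl` call below targets `http://127.0.0.1:8080/v1/embeddings`, i.e. it assumes a llama.cpp server is already running locally. A minimal sketch of such an invocation (the `--embeddings` and `--pooling` flags are assumptions here and may vary across llama.cpp versions):

```bash
# Sketch, not from the original README: serve the GGUF model locally with
# llama.cpp's llama-server, exposing /v1/embeddings on the default port 8080.
llama-server -hf jinaai/jina-embeddings-v4-text-matching-GGUF:F16 \
  --embeddings --pooling mean
```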

```bash
curl -X POST "http://127.0.0.1:8080/v1/embeddings" \
  ...
}'
```

You can also use `llama-embedding` for one-shot embedding:

```bash
llama-embedding -hf jinaai/jina-embeddings-v4-text-matching-GGUF:F16 --pooling mean -p "Query: jina is awesome" --embd-output-format json 2>/dev/null
```
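
If you want to consume that output programmatically, here's a rough sketch; it assumes the `json` output format follows an OpenAI-style `{"data": [{"embedding": [...]}]}` layout, which you should verify against your llama.cpp build:

```python
import json
import subprocess

# Run llama-embedding and parse its JSON output.
# Assumptions: llama-embedding is on PATH, and the JSON layout is
# OpenAI-style {"data": [{"embedding": [...]}]} -- verify on your build.
result = subprocess.run(
    [
        "llama-embedding",
        "-hf", "jinaai/jina-embeddings-v4-text-matching-GGUF:F16",
        "--pooling", "mean",
        "-p", "Query: jina is awesome",
        "--embd-output-format", "json",
    ],
    capture_output=True, text=True, check=True,
)
embedding = json.loads(result.stdout)["data"][0]["embedding"]
print(len(embedding))  # 2048 dims for v4
```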

## Remarks

### Consistency wrt. `AutoModel.from_pretrained`
To get fully consistent results as if you were [using `AutoModel.from_pretrained("jinaai/jina-embeddings-v4")...`](https://huggingface.co/jinaai/jina-embeddings-v4#usage), you need to be **very careful** about the prefixes and manually add them to your GGUF model inputs. Here's a reference table:

To some users, ⚠️ marks a somewhat surprising behavior: `prompt_name='passage'` gets overridden to `"Query: "` when using `text-matching` in the original `AutoModel.from_pretrained("jinaai/jina-embeddings-v4")`. However, this is reasonable, since `text-matching` is a sentence-similarity task with no left/right roles; the inputs are symmetric.
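
To make the rule concrete, here is a hypothetical helper; the function name is made up, and the task/prompt handling is sketched only from the table and the note above:

```python
def add_v4_prefix(text: str, task: str, prompt_name: str = "query") -> str:
    """Sketch of the prefixing rule for GGUF inputs (hypothetical helper).

    Covers only the cases discussed above: 'text-matching' always uses
    'Query: ' (inputs are symmetric); otherwise the prefix follows prompt_name.
    """
    if task == "text-matching":
        # matches the behavior marked above: 'passage' is overridden to 'Query: '
        return "Query: " + text
    return ("Passage: " if prompt_name == "passage" else "Query: ") + text


# e.g. add_v4_prefix("jina is awesome", task="retrieval", prompt_name="query")
# -> "Query: jina is awesome"
```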
### Matryoshka embeddings
Note that v4 is trained with Matryoshka embeddings, and converting to GGUF doesn't break the Matryoshka property. If you have embeddings of shape `NxD`, you can simply take `embeddings[:, :truncate_dim]` to get smaller truncated embeddings. Note that not every dimension is trained, though; for v4, `truncate_dim` must be one of `[128, 256, 512, 1024, 2048]`.
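
As a quick sketch of that truncation (assuming `numpy` arrays; the re-normalization step is an addition for cosine-similarity use, not from the original text):

```python
import numpy as np

# embeddings: stand-in for an (N x 2048) matrix of v4 embeddings
embeddings = np.random.rand(4, 2048).astype(np.float32)

truncate_dim = 512  # must be one of the trained dims: 128, 256, 512, 1024, 2048
truncated = embeddings[:, :truncate_dim]

# re-normalize so cosine similarity still behaves as expected after truncation
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)
print(truncated.shape)  # (4, 512)
```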