Update README.md

README.md
Here, we removed the visual components of qwen2.5-vl and merged all LoRA adapters…
| HuggingFace Repo | Task |
|---|---|
| [`jinaai/jina-embeddings-v4-text-retrieval-GGUF`](https://huggingface.co/jinaai/jina-embeddings-v4-text-retrieval-GGUF) | Text retrieval |
| [`jinaai/jina-embeddings-v4-text-code-GGUF`](https://huggingface.co/jinaai/jina-embeddings-v4-text-code-GGUF) | Code retrieval |
| [`jinaai/jina-embeddings-v4-text-matching-GGUF`](https://huggingface.co/jinaai/jina-embeddings-v4-text-matching-GGUF) | Sentence similarity |

All models above provide F16, Q8_0, Q6_K, Q5_K_M, Q4_K_M, and Q3_K_M quantizations. More quantizations, such as Unsloth-like dynamic quantizations, are on the way.

### Limitations

- They cannot handle image input.

TBA

## Get Embeddings

First [install llama.cpp](https://github.com/ggml-org/llama.cpp/blob/master/docs/install.md).

Run `llama-server` to host the embedding model as an OpenAI-API-compatible HTTP server. For example, to serve `text-matching` at `F16`:

```bash
llama-server -hf jinaai/jina-embeddings-v4-text-matching-GGUF:F16 --embedding --pooling mean -ub 8192
```

Remarks:

- `--pooling mean` is required, since v4 embeddings are mean-pooled.
- Setting `--pooling none` is *not* the same as the multi-vector embeddings of v4. The original v4 has a trained MLP on top of the last hidden states that outputs multi-vector embeddings, each with 128 dimensions. In GGUF, this MLP was chopped off.
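To illustrate what `--pooling mean` does, here is a toy sketch (not the model's actual implementation, and with made-up values): the per-token hidden states produced by the model are averaged into a single sentence-level vector.

```python
# Toy illustration of mean pooling: average per-token hidden
# states into one sentence embedding. Values are made up.
tokens = [
    [1.0, 2.0, 3.0],  # hidden state of token 1
    [3.0, 4.0, 5.0],  # hidden state of token 2
]
dim = len(tokens[0])
mean_pooled = [sum(tok[i] for tok in tokens) / len(tokens) for i in range(dim)]
print(mean_pooled)  # → [2.0, 3.0, 4.0]
```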
Client:

```bash
curl -X POST "http://127.0.0.1:8080/v1/embeddings" \
  …
```
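The server answers in the standard OpenAI embeddings JSON shape, so extracting the vectors from a response is straightforward. A minimal stdlib-only sketch, where the sample response below is illustrative and not real model output:

```python
import json

# Illustrative response in the OpenAI embeddings format returned
# by llama-server; the numbers here are made up.
raw = """
{
  "object": "list",
  "data": [
    {"object": "embedding", "index": 0, "embedding": [0.1, 0.2, 0.3]},
    {"object": "embedding", "index": 1, "embedding": [0.4, 0.5, 0.6]}
  ],
  "model": "jina-embeddings-v4-text-matching"
}
"""
resp = json.loads(raw)
# Sort by index so vectors line up with the order of the inputs.
vectors = [item["embedding"] for item in sorted(resp["data"], key=lambda d: d["index"])]
print(len(vectors), len(vectors[0]))  # → 2 3
```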
You can also use `llama-embedding` for one-shot embedding:

```bash
llama-embedding -hf jinaai/jina-embeddings-v4-text-matching-GGUF:F16 --pooling mean -p "jina is awesome" 2>/dev/null
```
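For the `text-matching` model, sentence similarity is typically scored as the cosine similarity between two embeddings. A minimal stdlib-only sketch, using toy vectors in place of real model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for two sentence embeddings;
# parallel vectors score 1.0 (maximally similar).
emb1 = [0.1, 0.3, 0.5]
emb2 = [0.2, 0.6, 1.0]
print(round(cosine_similarity(emb1, emb2), 4))  # → 1.0
```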