---
license: cc-by-nc-4.0
base_model:
- jinaai/jina-embeddings-v4
base_model_relation: quantized
---

# jina-embeddings-v4-gguf

A collection of GGUF conversions and quantizations of [`jina-embeddings-v4`](https://huggingface.co/jinaai/jina-embeddings-v4).

## Overview

`jina-embeddings-v4` is a cutting-edge universal embedding model [for multimodal multilingual retrieval](https://jina.ai/news/jina-embeddings-v4-universal-embeddings-for-multimodal-multilingual-retrieval). It's based on `qwen2.5-vl-3b-instruct` with three LoRA adapters: `retrieval` (optimized for retrieval tasks), `text-matching` (optimized for sentence similarity tasks), and `code` (optimized for code retrieval tasks). It is also heavily trained for visual document retrieval and late-interaction style multi-vector output.

## Text-Only Task-Specific Models

Here, we removed the visual components of `qwen2.5-vl` and merged all LoRA adapters back into the base language model. This results in three task-specific v4 models with 3.09B parameters each, down from the original `jina-embeddings-v4`'s 3.75B parameters:

| HuggingFace Repo | Task |
|---|---|
| [`jina-embeddings-v4-text-retrieval-GGUF`](https://huggingface.co/jinaai/jina-embeddings-v4-text-retrieval-GGUF) | Text retrieval |
| [`jina-embeddings-v4-text-code-GGUF`](https://huggingface.co/jinaai/jina-embeddings-v4-text-code-GGUF) | Code retrieval |
| [`jina-embeddings-v4-text-matching-GGUF`](https://huggingface.co/jinaai/jina-embeddings-v4-text-matching-GGUF) | Sentence similarity |

All models above provide F16, Q8_0, Q6_K, Q5_K_M, Q4_K_M, Q3_K_M quantizations.
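
To fetch a specific quantization locally, one option is `huggingface-cli download`; a minimal sketch, assuming the `.gguf` filenames follow a `<repo-name>-<quant>.gguf` pattern (check each repo's file list for the exact names):

```bash
# Download one quantization from the text-retrieval repo.
# The .gguf filename below is an assumption; verify it in the repo's file listing.
huggingface-cli download jinaai/jina-embeddings-v4-text-retrieval-GGUF \
  jina-embeddings-v4-text-retrieval-Q4_K_M.gguf --local-dir .
```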

### Limitations

- They cannot handle image input.
- They cannot output multi-vector embeddings.
- When using the retrieval and code models, you must prepend `Query: ` or `Passage: ` to the input. This ensures that queries and retrieval targets are embedded into the correct space.

## Multimodal Task-Specific Models

TBA

## Getting Embeddings

First [install llama.cpp](https://github.com/ggml-org/llama.cpp/blob/master/docs/install.md).

Run `llama-server` to host the embedding model as an HTTP server:

```bash
llama-server -m jina-embeddings-v4-text-matching-F16.gguf --embedding --pooling mean
```
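
By default, `llama-server` listens on `127.0.0.1:8080`, which is what the client examples below assume. If you need a different port, a sketch using llama.cpp's `--port` flag:

```bash
# Same server, but on a custom port; adjust the URL in the curl examples accordingly
llama-server -m jina-embeddings-v4-text-matching-F16.gguf --embedding --pooling mean --port 8081
```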

Client:

```bash
curl -X POST "http://127.0.0.1:8080/v1/embeddings" \
  -H "Content-Type: application/json" \
  -d '{
    "input": [
      "A beautiful sunset over the beach",
      "Un beau coucher de soleil sur la plage",
      "海滩上美丽的日落",
      "浜辺に沈む美しい夕日"
    ]
  }'
```
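
To sanity-check the output, you can pipe the response through `jq`; a small sketch, assuming the OpenAI-compatible response shape (vectors under `.data[].embedding`) and that `jq` is installed:

```bash
# Print the dimensionality of each returned embedding
# (assumes an OpenAI-style response: {"data": [{"embedding": [...]}, ...]})
curl -s -X POST "http://127.0.0.1:8080/v1/embeddings" \
  -H "Content-Type: application/json" \
  -d '{"input": ["A beautiful sunset over the beach"]}' | jq '.data[].embedding | length'
```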

Note: When using the `retrieval` and `code` models, add `Query: ` or `Passage: ` in front of your input, like this:

```bash
curl -X POST "http://127.0.0.1:8080/v1/embeddings" \
  -H "Content-Type: application/json" \
  -d '{
    "input": [
      "Query: A beautiful sunset over the beach",
      "Query: Un beau coucher de soleil sur la plage",
      "Query: 海滩上美丽的日落",
      "Query: 浜辺に沈む美しい夕日"
    ]
  }'
```

You can also use `llama-embedding` for one-shot embedding:

```bash
llama-embedding -m jina-embeddings-v4-text-matching-F16.gguf --pooling mean -p "jina is awesome" 2>/dev/null
```
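
The same prefix rule applies in one-shot mode for the `retrieval` and `code` models; a sketch, where the `-retrieval-F16.gguf` filename is assumed to follow the naming pattern above:

```bash
# One-shot embedding with the retrieval model; the "Query: " prefix is required
# (the filename is an assumption based on the naming pattern above)
llama-embedding -m jina-embeddings-v4-text-retrieval-F16.gguf --pooling mean -p "Query: jina is awesome" 2>/dev/null
```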