VivekMalipatel23 committed 4161826 (verified, parent 508233f): Update README.md

Files changed (1): README.md (+160 −1)

tags:
  - multimodal_embedding
  - multilingual_embedding
  - Text-to-Visual Document (T→VD) retrieval
---

# Nomic Embed Multimodal 3B: State-of-the-Art Visual Document Retrieval

`nomic-embed-multimodal-3b` is a state-of-the-art dense multimodal embedding model that excels at visual document retrieval tasks:

- **High Performance**: Achieves 58.8 NDCG@5 on Vidore-v2, outperforming all other similarly sized dense multimodal embedding models
- **Unified Text-Image Encoding**: Directly encodes interleaved text and images without complex preprocessing
- **Advanced Architecture**: 3B-parameter multimodal embedding model
- **Open Weights**: Model weights available for research use

## Performance

NDCG@5 scores on the Vidore-v2 benchmark:

| Model | Avg. | ESG Restaurant Human | Econ Macro Multi. | AXA Multi. | MIT Bio | ESG Restaurant Synth. | ESG Restaurant Synth. Multi. | MIT Bio Multi. | AXA | Econ. Macro |
|-------|------|----------------------|-------------------|------------|---------|-----------------------|------------------------------|----------------|-----|-------------|
| [ColNomic Embed Multimodal 7B](https://huggingface.co/nomic-ai/colnomic-embed-multimodal-7b) | 62.7 | 73.9 | 54.7 | 61.3 | 66.1 | 57.3 | 56.7 | 64.2 | 68.3 | 61.6 |
| [ColNomic Embed Multimodal 3B](https://huggingface.co/nomic-ai/colnomic-embed-multimodal-3b) | 61.2 | 65.8 | 55.4 | 61.0 | 63.5 | 56.6 | 57.2 | 62.5 | 68.8 | 60.2 |
| T-Systems ColQwen2.5-3B | 59.9 | 72.1 | 51.2 | 60.0 | 65.3 | 51.7 | 53.3 | 61.7 | 69.3 | 54.8 |
| [Nomic Embed Multimodal 7B](https://huggingface.co/nomic-ai/nomic-embed-multimodal-7b) | 59.7 | 65.7 | 57.7 | 59.3 | 64.0 | 49.2 | 51.9 | 61.2 | 66.3 | 63.1 |
| GME Qwen2 7B | 59.0 | 65.8 | 56.2 | 55.4 | 64.0 | 54.3 | 56.7 | 55.1 | 60.7 | 62.9 |
| **Nomic Embed Multimodal 3B** | 58.8 | 59.8 | 57.5 | 58.8 | 62.5 | 49.4 | 49.4 | 58.6 | 69.6 | 63.5 |
| Llama Index vdr-2b-multi-v1 | 58.4 | 63.1 | 52.8 | 61.0 | 60.6 | 50.3 | 51.2 | 56.9 | 68.8 | 61.2 |
| Voyage Multimodal 3 | 55.0 | 56.1 | 55.0 | 59.5 | 56.4 | 47.2 | 46.2 | 51.5 | 64.1 | 58.8 |

## Getting Started

To use `nomic-embed-multimodal-3b`, install `colpali-engine` from source:

```bash
pip install git+https://github.com/illuin-tech/colpali.git
```

```python
import torch
from PIL import Image
from transformers.utils.import_utils import is_flash_attn_2_available

from colpali_engine.models import BiQwen2_5, BiQwen2_5_Processor

model_name = "nomic-ai/nomic-embed-multimodal-3b"

model = BiQwen2_5.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
    attn_implementation="flash_attention_2" if is_flash_attn_2_available() else None,
).eval()

processor = BiQwen2_5_Processor.from_pretrained(model_name)

# Your inputs
images = [
    Image.new("RGB", (128, 128), color="white"),
    Image.new("RGB", (64, 32), color="black"),
]
queries = [
    "What is the organizational structure for our R&D department?",
    "Can you provide a breakdown of last year’s financial performance?",
]

# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# Pairwise query-image similarity scores
scores = processor.score(list(torch.unbind(query_embeddings)), list(torch.unbind(image_embeddings)))
```
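
The scores come back as a query-by-image similarity matrix, so ranking documents is a one-liner. A minimal follow-on sketch, assuming `scores` is the `(num_queries, num_images)` tensor returned above:

```python
# Rank images for each query, best match first. Assumes `scores` is the
# (num_queries, num_images) tensor returned by processor.score above.
rankings = scores.argsort(dim=1, descending=True)
for qi, query in enumerate(queries):
    best = rankings[qi, 0].item()
    print(f"{query!r} -> image {best} (score {scores[qi, best].item():.3f})")
```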

## Model Architecture

- **Total Parameters**: 3B
- **Training Approach**: Fine-tuned from Qwen2.5-VL 3B Instruct
- **Architecture Type**: Vision-language model with unified text and image input processing
- **Key Innovations**:
  - Same-source sampling to create harder in-batch negatives
  - Hard negative mining with positive-aware techniques

## Integration with RAG Workflows

Nomic Embed Multimodal 3B integrates seamlessly with retrieval-augmented generation (RAG) workflows:

1. **Direct Document Embedding**: Skip OCR and complex processing by directly embedding document page images
2. **Faster Processing**: Eliminate preprocessing steps for quicker indexing
3. **More Complete Information**: Capture both textual and visual cues in a single embedding
4. **Simple Implementation**: Use the same API for both text and images (see the sketch after this list)
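
To make the workflow concrete, here is a minimal index-and-search sketch built on the model and processor from the Getting Started snippet. `build_index` and `search` are hypothetical helper names, and the dot-product search assumes the model's single-vector embeddings:

```python
import torch
from PIL import Image

def build_index(image_paths, model, processor, batch_size=4):
    """Embed each page image once; returns a (num_pages, dim) tensor."""
    chunks = []
    for i in range(0, len(image_paths), batch_size):
        pages = [Image.open(p).convert("RGB") for p in image_paths[i : i + batch_size]]
        batch = processor.process_images(pages).to(model.device)
        with torch.no_grad():
            chunks.append(model(**batch))
    return torch.cat(chunks)

def search(query, index, model, processor, k=3):
    """Embed the query and score it against every indexed page."""
    batch = processor.process_queries([query]).to(model.device)
    with torch.no_grad():
        q = model(**batch)              # shape: (1, dim)
    scores = (q @ index.T).squeeze(0)   # dot-product similarity per page
    return scores.topk(min(k, len(scores)))
```

The returned indices can then be mapped back to page images and handed to a downstream generator for answer synthesis.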

## Recommended Use Cases

The model excels at handling real-world document retrieval scenarios that challenge traditional text-only systems:

- **Research Papers**: Capture equations, diagrams, and tables
- **Technical Documentation**: Encode code blocks, flowcharts, and screenshots
- **Product Catalogs**: Represent images, specifications, and pricing tables
- **Financial Reports**: Embed charts, graphs, and numerical data
- **Visually Rich Content**: Where layout and visual information are important
- **Multilingual Documents**: Where visual context provides important cues

## Training Details

Nomic Embed Multimodal 3B was developed through several key innovations (sketched after this list):

1. **Sampling From the Same Source**: Forcing sampling from the same dataset source creates harder in-batch negatives, preventing the model from learning dataset artifacts.

2. **Hard Negative Mining**: Using an initial model to retrieve the top-k nearest neighbors for each query, then incorporating these hard negatives into training.

3. **Positive-aware Hard Negative Mining**: Reducing false negatives using techniques introduced in NV-Retriever.
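A minimal sketch of ideas 1 and 3, assuming each training example records its dataset `source` and that mined negatives carry retrieval scores from the initial model; `same_source_batches` and `filter_hard_negatives` are illustrative names, not the actual training code:

```python
import random
from collections import defaultdict

def same_source_batches(examples, batch_size, seed=0):
    """Group examples by dataset source so every in-batch negative comes
    from the same distribution as the positive (idea 1)."""
    rng = random.Random(seed)
    by_source = defaultdict(list)
    for ex in examples:
        by_source[ex["source"]].append(ex)
    batches = []
    for items in by_source.values():
        rng.shuffle(items)
        batches += [items[i : i + batch_size]
                    for i in range(0, len(items) - batch_size + 1, batch_size)]
    rng.shuffle(batches)  # mix sources across steps, never within a batch
    return batches

def filter_hard_negatives(pos_score, neg_scores, margin=0.95):
    """Positive-aware filter in the spirit of NV-Retriever (idea 3): drop
    mined negatives scoring above a fraction of the positive's score,
    since those are likely unlabeled positives (false negatives)."""
    return [s for s in neg_scores if s < margin * pos_score]
```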

## Limitations

- Performance may vary when processing documents with unconventional layouts or unusual visual elements
- While the model handles multiple languages, performance is strongest on English content
- Very large or complex documents may need to be divided into smaller chunks (see the sketch below)
- Performance on documents with handwriting or heavily stylized fonts may be reduced
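
One simple way to chunk an oversized page is to slice the rendered image into overlapping vertical tiles and embed each tile separately. A minimal sketch; `split_page` and its tile sizes are illustrative assumptions, not part of the library:

```python
from PIL import Image

def split_page(page: Image.Image, tile_height=1024, overlap=128):
    """Slice a tall page image into overlapping vertical tiles so each
    stays within a comfortable input size for the encoder (hypothetical
    helper; tune tile_height/overlap to your documents)."""
    tiles, top = [], 0
    while top < page.height:
        bottom = min(top + tile_height, page.height)
        tiles.append(page.crop((0, top, page.width, bottom)))
        if bottom == page.height:
            break
        top = bottom - overlap  # overlap preserves context across cuts
    return tiles
```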

## Join the Nomic Community

- Nomic Embed Ecosystem: [https://www.nomic.ai/embed](https://www.nomic.ai/embed)
- Website: [https://nomic.ai](https://nomic.ai)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)

## Citation

If you find this model useful in your research or applications, please consider citing:

```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449},
}

@misc{ma2024unifyingmultimodalretrievaldocument,
  title={Unifying Multimodal Retrieval via Document Screenshot Embedding},
  author={Xueguang Ma and Sheng-Chieh Lin and Minghan Li and Wenhu Chen and Jimmy Lin},
  year={2024},
  eprint={2406.11251},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2406.11251},
}

@misc{nomicembedmultimodal2025,
  title={Nomic Embed Multimodal: Interleaved Text, Image, and Screenshots for Visual Document Retrieval},
  author={Nomic Team},
  year={2025},
  publisher={Nomic AI},
  url={https://nomic.ai/blog/posts/nomic-embed-multimodal},
}
```