---
library_name: transformers
license: apache-2.0
datasets:
- aimagelab/ReT-M2KR
base_model:
- laion/CLIP-ViT-H-14-laion2B-s32B-b79K
pipeline_tag: visual-document-retrieval
---
# Model Card: ReT-2

Official implementation of ReT-2: *Recurrence Meets Transformers for Universal Multimodal Retrieval*.

This model features visual and textual backbones based on [laion/CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K). The backbones have been fine-tuned on the M2KR dataset.
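The snippet below is a minimal loading sketch, assuming the checkpoint ships custom modeling code that is picked up through `trust_remote_code`; the repository id shown is a placeholder for this card's checkpoint, and the query/passage encoding interface is defined by the checkpoint's own code rather than by this example.

```python
import torch
from transformers import AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# NOTE: the repo id below is a placeholder (an assumption of this sketch);
# replace it with this model card's actual checkpoint id on the Hub.
model = AutoModel.from_pretrained(
    "aimagelab/ReT2-CLIP-ViT-H-14",  # hypothetical id
    trust_remote_code=True,          # assumed: ReT-2 loads via custom modeling code
)
model = model.to(device).eval()
```

See the GitHub repository linked below for end-to-end retrieval examples against the loaded model.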
## Model Sources

- **Repository:** https://github.com/aimagelab/ReT-2
- **Paper:** [Recurrence Meets Transformers for Universal Multimodal Retrieval](https://arxiv.org/abs/2509.08897)
## Training Data

ReT-2 has been fine-tuned on the [M2KR](https://huggingface.co/datasets/aimagelab/ReT-M2KR) dataset, a benchmark for multi-task multimodal knowledge retrieval.
## Citation

```bibtex
@article{caffagni2025recurrencemeetstransformers,
  title={{Recurrence Meets Transformers for Universal Multimodal Retrieval}},
  author={Davide Caffagni and Sara Sarto and Marcella Cornia and Lorenzo Baraldi and Rita Cucchiara},
  journal={arXiv preprint arXiv:2509.08897},
  year={2025}
}
```