Q2E: Query-to-Event Decomposition for Zero-Shot Multilingual Text-to-Video Retrieval
Abstract
Q2E, a Query-to-Event decomposition method, enhances text-to-video retrieval by leveraging latent parametric knowledge from LLMs and VLMs, and outperforms existing methods across datasets and metrics.
Recent approaches have shown impressive proficiency in extracting and leveraging parametric knowledge from Large Language Models (LLMs) and Vision-Language Models (VLMs). In this work, we consider how to improve the identification and retrieval of videos related to complex real-world events by automatically extracting latent parametric knowledge about those events. We present Q2E, a Query-to-Event decomposition method for zero-shot multilingual text-to-video retrieval that is adaptable across datasets, domains, LLMs, and VLMs. Our approach demonstrates that otherwise oversimplified human queries can be better understood by decomposing them using the knowledge embedded in LLMs and VLMs. We further show how to apply our approach to both visual and speech-based inputs. To combine this varied multimodal knowledge, we adopt entropy-based scoring for zero-shot fusion. Through evaluations on two diverse datasets and multiple retrieval metrics, we demonstrate that Q2E outperforms several state-of-the-art baselines. Our evaluation also shows that integrating audio information can significantly improve text-to-video retrieval. We have released our code and data for future research.
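As a rough illustration of the entropy-based fusion step named in the abstract, the minimal sketch below weights each modality's retrieval scores by how peaked (confident) its score distribution is over the candidate videos. This is not the authors' released code: the function names, the softmax-entropy weighting scheme, and the toy scores are all assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def entropy_weight(scores: np.ndarray) -> float:
    """Weight a modality by how peaked its score distribution is.

    A confident, peaked distribution over candidate videos has low
    entropy and therefore receives a higher fusion weight.
    """
    p = np.exp(scores - scores.max())      # softmax over candidates
    p /= p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    max_entropy = np.log(len(scores))      # entropy of a uniform distribution
    return 1.0 - entropy / max_entropy     # normalized to [0, 1]

def fuse(visual_scores: np.ndarray, speech_scores: np.ndarray) -> np.ndarray:
    """Entropy-weighted fusion of per-candidate retrieval scores."""
    w_v = entropy_weight(visual_scores)
    w_s = entropy_weight(speech_scores)
    return (w_v * visual_scores + w_s * speech_scores) / (w_v + w_s + 1e-12)

# Toy example: scores for 4 candidate videos from two modalities,
# e.g. aggregated matches against the decomposed sub-queries.
visual = np.array([0.9, 0.2, 0.1, 0.1])   # relatively peaked (lower entropy)
speech = np.array([0.5, 0.5, 0.4, 0.5])   # nearly flat (higher entropy)
print(fuse(visual, speech))               # visual dominates the fused ranking
```

Under this scheme, a modality that barely discriminates between candidates (near-uniform scores) contributes little to the final ranking, which matches the abstract's motivation for combining visual and speech evidence without supervised fusion weights.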
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- GAID: Frame-Level Gated Audio-Visual Integration with Directional Perturbation for Text-Video Retrieval (2025)
- Zero-shot Multimodal Document Retrieval via Cross-modal Question Generation (2025)
- T2VParser: Adaptive Decomposition Tokens for Partial Alignment in Text to Video Retrieval (2025)
- VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents (2025)
- Context-Adaptive Multi-Prompt Embedding with Large Language Models for Vision-Language Alignment (2025)
- Repeating Words for Video-Language Retrieval with Coarse-to-Fine Objectives (2025)
- Bidirectional Likelihood Estimation with Multi-Modal Large Language Models for Text-Video Retrieval (2025)