---
language:
  - en
license: mit
size_categories:
  - 1K<n<10K
task_categories:
  - video-text-to-text
pretty_name: ImplicitQA
configs:
  - config_name: default
    data_files:
      - split: eval
        path: ImplicitQAv0.1.2.jsonl
tags:
  - implicit-reasoning
---

# ImplicitQA Dataset

The ImplicitQA dataset was introduced in the paper *ImplicitQA: Going beyond frames towards Implicit Video Reasoning*.

Project page: https://implicitqa.github.io/

ImplicitQA is a novel benchmark specifically designed to test models on implicit reasoning in Video Question Answering (VideoQA). Existing VideoQA benchmarks primarily focus on questions answerable from explicit visual content: actions, objects, and events directly observable within individual frames or short clips. ImplicitQA instead requires models to infer motives, causality, and relationships across discontinuous frames, mirroring the human-like understanding needed for creative and cinematic videos, whose storytelling techniques often deliberately omit explicit depictions.

The dataset comprises 1,000 meticulously annotated QA pairs derived from over 320 high-quality creative video clips. These QA pairs are systematically categorized into key reasoning dimensions, including:

- Lateral and vertical spatial reasoning
- Depth and proximity
- Viewpoint and visibility
- Motion and trajectory
- Causal and motivational reasoning
- Social interactions
- Physical context
- Inferred counting

The annotations are deliberately challenging, crafted to ensure high quality and to highlight the difficulty of implicit reasoning for current VideoQA models. By releasing both the dataset and its data collection framework, the authors aim to stimulate further research and development in this crucial area of AI.
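
As a quick-start sketch, the `eval` split declared in the YAML config above (data file `ImplicitQAv0.1.2.jsonl`) can be loaded with the `datasets` library. The repo ID below uses a placeholder namespace (`<org>`), since the hosting organization is not stated in this card; replace it with the actual dataset path on the Hugging Face Hub.

```python
from datasets import load_dataset

# Load the single "eval" split declared in the card's config
# (data file: ImplicitQAv0.1.2.jsonl).
# "<org>" is a placeholder; substitute the actual namespace.
ds = load_dataset("<org>/ImplicitQA", split="eval")

print(ds)       # number of rows and column names
print(ds[0])    # inspect the first QA record

# Optionally convert to a pandas DataFrame for quick exploration
df = ds.to_pandas()
print(df.head())
```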