---
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - image
  - text
  - vlm
  - spatial-perception
  - spatial-reasoning
annotations_creators:
  - expert-generated
pretty_name: SPHERE
size_categories:
  - 1K<n<10K
source_datasets:
  - MS COCO-2017
configs:
  - config_name: distance_and_counting
    data_files: coco_test2017_annotations/combine_2_skill/distance_and_counting.json
  - config_name: distance_and_size
    data_files: coco_test2017_annotations/combine_2_skill/distance_and_size.json
  - config_name: position_and_counting
    data_files: coco_test2017_annotations/combine_2_skill/position_and_counting.json
  - config_name: object_manipulation
    data_files: coco_test2017_annotations/reasoning/object_manipulation.json
  - config_name: object_manipulation_w_intermediate
    data_files: >-
      coco_test2017_annotations/reasoning/object_manipulation_w_intermediate.json
  - config_name: object_occlusion
    data_files: coco_test2017_annotations/reasoning/object_occlusion.json
  - config_name: object_occlusion_w_intermediate
    data_files: coco_test2017_annotations/reasoning/object_occlusion_w_intermediate.json
  - config_name: counting_only-paired-distance_and_counting
    data_files: >-
      coco_test2017_annotations/single_skill/counting_only-paired-distance_and_counting.json
  - config_name: counting_only-paired-position_and_counting
    data_files: >-
      coco_test2017_annotations/single_skill/counting_only-paired-position_and_counting.json
  - config_name: distance_only
    data_files: coco_test2017_annotations/single_skill/distance_only.json
  - config_name: position_only
    data_files: coco_test2017_annotations/single_skill/position_only.json
  - config_name: size_only
    data_files: coco_test2017_annotations/single_skill/size_only.json
---

SPHERE (Spatial Perception and Hierarchical Evaluation of REasoning) is a benchmark for assessing spatial reasoning in vision-language models. It introduces a hierarchical evaluation framework with a human-annotated dataset, testing models on tasks ranging from basic spatial understanding to complex multi-skill reasoning. SPHERE poses significant challenges for both state-of-the-art open-source and proprietary models, revealing critical gaps in spatial cognition.

## Usage

This dataset provides annotations only and is designed to be used on top of the MS COCO-2017 images, which must be obtained separately. For more details, please see https://github.com/zwenyu/SPHERE-VLM?tab=readme-ov-file#data.
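Each config listed in the YAML header above can be loaded individually with the Hugging Face `datasets` library. The sketch below is a minimal, hedged example: the Hub repository id `wei2912/SPHERE-VLM` is an assumption inferred from this card, and the `load_sphere` helper and its config-name check are illustrative, not part of the dataset.

```python
# Hedged sketch of loading one SPHERE config with the Hugging Face
# `datasets` library. The repository id "wei2912/SPHERE-VLM" is an
# assumption; substitute the actual Hub id if it differs.

# Config names copied from the YAML header of this dataset card.
CONFIGS = [
    "distance_and_counting",
    "distance_and_size",
    "position_and_counting",
    "object_manipulation",
    "object_manipulation_w_intermediate",
    "object_occlusion",
    "object_occlusion_w_intermediate",
    "counting_only-paired-distance_and_counting",
    "counting_only-paired-position_and_counting",
    "distance_only",
    "position_only",
    "size_only",
]


def load_sphere(config_name: str):
    """Load one SPHERE annotation config; raises ValueError for unknown names."""
    if config_name not in CONFIGS:
        raise ValueError(f"Unknown SPHERE config: {config_name!r}")
    # Imported lazily so the name check above works even without
    # the `datasets` package installed.
    from datasets import load_dataset

    return load_dataset("wei2912/SPHERE-VLM", config_name)
```

Note that each config maps to a single annotation JSON file; the corresponding images come from MS COCO-2017 and are not bundled with this dataset.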