---
dataset_info:
features:
- name: vclip_id
dtype: string
- name: question_id
dtype: int32
- name: question
dtype: string
- name: answer
dtype: string
- name: frame_indexes
sequence: int32
- name: choices
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: video_metadata
struct:
- name: CLIP-reference-interval
sequence: float64
- name: bitrate
dtype: int64
- name: codec
dtype: string
- name: frame_dimensions
sequence: int64
- name: frame_dimensions_resized
sequence: int64
- name: frame_rate
dtype: float64
- name: resolution
dtype: string
- name: resolution_resized
dtype: string
- name: vclip_duration
dtype: float64
- name: vclip_frame_count
dtype: int64
- name: video_duration
dtype: float64
- name: video_frame_count
dtype: int64
- name: video_id
dtype: string
splits:
- name: train
num_bytes: 4782472
num_examples: 11218
- name: test
num_bytes: 1776278
num_examples: 3874
download_size: 1999818
dataset_size: 6558750
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
<h1 align='center' style="text-align:center; font-weight:bold; font-size:2.0em;letter-spacing:2.0px;">
LV-Haystack: Temporal Search in Long-Form Video Understanding</h1>
<p align='center' style="text-align:center;font-size:1.25em;">
<a href="https://jhuiye.com/" target="_blank">Jinhui Ye<sup>1</sup></a>,
<a href="https://zihanwang314.github.io/" target="_blank">Zihan Wang<sup>2</sup></a>,
<a href="https://haosensun.github.io/" target="_blank">Haosen Sun<sup>2</sup></a>,
<a href="https://keshik6.github.io/" target="_blank">Keshigeyan Chandrasegaran<sup>1</sup></a>,
<a href="https://zanedurante.github.io/" target="_blank">Zane Durante<sup>1</sup></a>, <br/>
<a href="https://ceyzaguirre4.github.io/" target="_blank">Cristobal Eyzaguirre<sup>1</sup></a>,
<a href="https://talkingtorobots.com/yonatanbisk.html" target="_blank">Yonatan Bisk<sup>3</sup></a>,
<a href="https://www.niebles.net/" target="_blank">Juan Carlos Niebles<sup>1</sup></a>,
<a href="https://profiles.stanford.edu/ehsan-adeli" target="_blank">Ehsan Adeli<sup>1</sup></a>,
<a href="https://profiles.stanford.edu/fei-fei-li/" target="_blank">Li Fei-Fei<sup>1</sup></a>,
<a href="https://jiajunwu.com/" target="_blank">Jiajun Wu<sup>1</sup></a>,
<a href="https://limanling.github.io/" target="_blank">Manling Li<sup>2</sup></a><br/>
Stanford University<sup>1</sup>, Northwestern University<sup>2</sup>, Carnegie Mellon University<sup>3</sup><br/>
<em>Conference on AI Research, 2025</em><br/>
<a href="https://examplewebsite.com" title="Website" target="_blank" rel="nofollow" style="text-decoration: none;">🌎Website</a> |
<a href="https://examplecode.com" title="Code" target="_blank" rel="nofollow" style="text-decoration: none;">🧑‍💻Code</a> |
<a href="https://arxiv.org/examplepaper" title="arXiv" target="_blank" rel="nofollow" style="text-decoration: none;">📄arXiv</a> |
<a href="https://exampleleaderboard.com" title="Leaderboard" target="_blank" rel="nofollow" style="text-decoration: none;">🏆 Leaderboard (Coming Soon)</a><br>
</p>
<img src="assets/img/logo.png" alt="Logo" width="400" height="auto" style="display:block; margin:auto;" />
<p align='center' style="text-align:center; font-size:1.25em; color: gray">
Dataset is part of the <a href="">T* project</a></p>
## News
- **1/1/2025: Thrilled to announce T\* and LV-Haystack!**
## Dataset Sample
```python
{
'vclip_id': '6338b73e-393f-4d37-b278-68703b45908c',
'question_id': 10,
'question': 'What nail did I pull out?',
'answer': 'E',
'frame_indexes': [5036, 5232],
'choices': {
'A': 'The nail from the front wheel fender',
'B': 'The nail from the motorcycle battery compartment',
'C': 'The nail from the left side of the motorcycle seat',
'D': 'The nail from the rearview mirror mount',
'E': 'The nail on the right side of the motorcycle exhaust pipe'
},
    'video_metadata': {
        'CLIP-reference-interval': [180.0, 240.0],  # Interval (in seconds) of the clip within the full video
        'video_frame_count': 14155,                 # Total number of frames in the video
        'frame_rate': 30.0,                         # Frame rate of the video
        'video_duration': 471.8333435058594,        # Duration of the video in seconds
        'resolution': '454x256',                    # Original resolution of the video
        'frame_dimensions': None,                   # Original frame dimensions (if available)
        'codec': 'N/A',                             # Video codec (not available here)
        'bitrate': 0,                               # Bitrate of the video (0 when unknown)
        'frame_dimensions_resized': [340, 256],     # Resized frame dimensions
        'resolution_resized': '340x256',            # Resized resolution
        'video_id': 'b6ae365a-dd70-42c4-90d6-e0351778d991'  # Unique video identifier
    }
}
```
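The `frame_indexes` field stores keyframe positions within the full video, so dividing by `frame_rate` from `video_metadata` converts them to timestamps in seconds. A minimal sketch using the values from the sample record above (the helper name is illustrative, not part of the dataset API):

```python
def frames_to_timestamps(frame_indexes, frame_rate):
    """Convert frame indexes into timestamps (in seconds)."""
    return [idx / frame_rate for idx in frame_indexes]

# Values taken from the sample record above
timestamps = frames_to_timestamps([5036, 5232], 30.0)
print([round(t, 2) for t in timestamps])  # [167.87, 174.4]
```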
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("LVHaystack/LongVideoHaystack")
print(dataset)
```
```
>>> DatasetDict({
train: Dataset({
features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
num_rows: 11218
})
test: Dataset({
features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
num_rows: 3874
})
})
```
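Because `answer` stores only the letter of the correct option, it can be resolved against `choices` to recover the answer text. A small sketch against the record format shown above (the dict here is trimmed from the sample for brevity, not a real loaded example):

```python
def resolve_answer(example):
    """Map the answer letter (e.g. 'E') to its full choice text."""
    return example['choices'][example['answer']]

# Trimmed copy of the sample record above
example = {
    'answer': 'E',
    'choices': {
        'A': 'The nail from the front wheel fender',
        'E': 'The nail on the right side of the motorcycle exhaust pipe',
    },
}
print(resolve_answer(example))
```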
## Abstract
[[ABSTRACT]]
## [[TITLE]] Statistics
<img src="[[STATISTICS_IMAGE_LINK]]" alt="image description" width="850" height="200">
## Dataset Organization
The dataset is organized to facilitate easy access to all resources. Below is the structure:
```
[[DATASET_ORGANIZATION_STRUCTURE]]
```
### Description of Key Components
```[[KEY_COMPONENT_PATH]]```: This directory contains resources in [[FORMAT]] format. Each file includes metadata and other details:
- ```[[DATA_FILE_1]]```:
- [[DESCRIPTION_1]].
- ```[[DATA_FILE_2]]```:
- [[DESCRIPTION_2]].
- ```[[DATA_FILE_3]]```:
- [[DESCRIPTION_3]].
### Annotation Format
Each entry includes metadata in the following format:
```
{
"[[FIELD_1]]": {
"[[METADATA_FIELD_1]]": {
"[[DETAIL_1]]": [[DETAIL_TYPE_1]],
"[[DETAIL_2]]": [[DETAIL_TYPE_2]],
},
"[[BENCHMARK_FIELD]]": [
{
"[[QUESTION_FIELD]]": [[QUESTION_TYPE]],
"[[TASK_FIELD]]": [[TASK_TYPE]],
"[[LABEL_FIELD]]": [[LABEL_TYPE]],
"[[TIMESTAMP_FIELD]]": [[TIMESTAMP_TYPE]],
"[[MCQ_FIELD]]": "[[MCQ_OPTIONS]]",
"[[ANSWER_FIELD_1]]": [[ANSWER_TYPE_1]],
"[[ANSWER_FIELD_2]]": [[ANSWER_TYPE_2]],
"[[ANSWER_FIELD_3]]": [[ANSWER_TYPE_3]],
"[[ANSWER_FIELD_4]]": [[ANSWER_TYPE_4]],
"[[ANSWER_FIELD_5]]": [[ANSWER_TYPE_5]]
},
// Next question
]
},
// Next entry
}
```
## Limitations
[[LIMITATIONS]]
## Contact
- [[CONTACT_1]]
- [[CONTACT_2]]
- [[CONTACT_3]]
## Citation
```bibtex
[[BIBTEX]]
```