---
license: mit
task_categories:
- image-classification
- object-detection
- visual-question-answering
- zero-shot-image-classification
language:
- en
tags:
- ego4d
- egocentric-vision
- computer-vision
- random-sampling
- video-frames
- first-person-view
- activity-recognition
size_categories:
- 10K<n<100K
---

# Ego4D Random Frame Sampling Dataset

20,000 frames randomly sampled from the Ego4D egocentric video corpus, stored as 1024×1024 RGB PNG images with complete provenance metadata (source video, timestamp, frame rate, frame count, and sampling worker).

## Dataset Structure

Each sample contains:

- `image`: the sampled frame (1024×1024 RGB PNG)
- `video_uid`: identifier of the source Ego4D video
- `timestamp_sec`: timestamp of the sampled frame within the video, in seconds
- `fps`: frame rate of the source video
- `total_frames`: total number of frames in the source video
- `worker_id`: identifier of the sampling worker that extracted the frame

## Usage

### Data Analysis

After loading the dataset with `datasets.load_dataset`, the per-frame metadata can be flattened into a pandas DataFrame for quick analysis:

```python
import pandas as pd
from collections import Counter

# `dataset` is this dataset as returned by datasets.load_dataset(...)

# Convert the metadata to pandas for analysis
data = []
for sample in dataset['train']:
    data.append({
        'video_uid': sample['video_uid'],
        'timestamp_sec': sample['timestamp_sec'],
        'fps': sample['fps'],
        'total_frames': sample['total_frames'],
        'worker_id': sample['worker_id']
    })
df = pd.DataFrame(data)

# Basic statistics
print(f"Unique videos: {df['video_uid'].nunique()}")
print(f"Average FPS: {df['fps'].mean():.2f}")
print(f"Timestamp range: {df['timestamp_sec'].min():.2f}s - {df['timestamp_sec'].max():.2f}s")

# Video distribution
video_counts = Counter(df['video_uid'])
print(f"Samples per video - Min: {min(video_counts.values())}, Max: {max(video_counts.values())}")
```

## Applications

This dataset is suitable for:

- **Egocentric vision research**: First-person view understanding
- **Activity recognition**: Daily activity classification
- **Object detection**: Objects in natural settings
- **Scene understanding**: Indoor/outdoor scene analysis
- **Transfer learning**: Pre-training for egocentric tasks
- **Multi-modal learning**: Combining with video metadata
- **Temporal analysis**: Using timestamp information

## Generation Statistics

- **Target Frames**: 20,000
- **Generated Frames**: 20,000
- **Success Rate**: 100.0%
- **Generation Time**: 13.3 minutes
- **Workers Used**: 128
- **Processing Speed**: 25.08 frames/second
- **Source Videos**: 52,665+ Ego4D video files
- **Diversity**: Maximum diversity through distributed sampling

## Technical Details

### Sampling Strategy

- **Random Selection**: Both source videos and frame positions are randomly sampled
- **Worker Distribution**: Videos distributed across 128 workers for diversity
- **Quality Control**: Automatic validation and error recovery
- **Metadata Preservation**: Complete provenance tracking

### Data Quality

- **Image Quality**: All frames validated during generation
- **Resolution**: Consistent 1024×1024 PNG format
- **Color Space**: RGB
- **Compression**: Lossless PNG compression
- **Metadata Completeness**: 100% metadata coverage

## Citation

If you use this dataset, please cite the original Ego4D paper:

```bibtex
@inproceedings{grauman2022ego4d,
  title={Ego4d: Around the world in 3,000 hours of egocentric video},
  author={Grauman, Kristen and Westbury, Andrew and Byrne, Eugene and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={18211--18230},
  year={2022}
}
```

## License

This dataset follows the same license terms as the original Ego4D dataset. Please refer to the [Ego4D license](https://ego4d-data.org/pdfs/Ego4D-License.pdf) for usage terms.

## Dataset Creation

This dataset was generated using a high-performance multi-process sampling system designed for maximum diversity and efficiency. The generation process:

1. **Video Indexing**: Scanned 52,665+ Ego4D video files
2. **Distributed Sampling**: Used 128 parallel workers for maximum diversity (a per-worker sketch follows this list)
3. **Quality Assurance**: Validated each frame during generation
4. **Metadata Collection**: Captured complete provenance information
5. **Efficient Upload**: Published with the Hugging Face `datasets` library in Parquet format (see the sketch at the end of this card)
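The actual generation code lives in the linked repository; purely as an illustration of steps 1–4, the sketch below shows what a single sampling worker could look like. The function name, output directory, filename pattern, and the assumption that the video UID equals the file stem are all hypothetical, and OpenCV is assumed for video decoding.

```python
import random
from pathlib import Path

import cv2  # OpenCV, assumed here for video decoding


def sample_random_frame(video_paths, worker_id, out_dir="frames"):
    """Hypothetical per-worker step: one random frame from one random video."""
    video_path = Path(random.choice(video_paths))     # random source video
    cap = cv2.VideoCapture(str(video_path))
    fps = cap.get(cv2.CAP_PROP_FPS)
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total_frames <= 0:
        cap.release()
        return None                                   # unreadable video: caller retries
    frame_idx = random.randrange(total_frames)        # random frame position
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None                                   # decode failure: error recovery upstream
    frame = cv2.resize(frame, (1024, 1024))           # consistent 1024x1024 resolution
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    out_path = Path(out_dir) / f"{video_path.stem}_{frame_idx}.png"
    cv2.imwrite(str(out_path), frame)                 # lossless PNG (OpenCV expects BGR arrays)
    return {                                          # provenance metadata for this frame
        "image": str(out_path),
        "video_uid": video_path.stem,
        "timestamp_sec": frame_idx / fps if fps else 0.0,
        "fps": fps,
        "total_frames": total_frames,
        "worker_id": worker_id,
    }
```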
For more details on the generation process, see the [technical documentation](https://github.com/your-repo/ego4d-random-sampling).
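As a companion to step 5 above, here is a minimal, hypothetical sketch of how the collected records could be assembled and pushed to the Hub with the Hugging Face `datasets` library (which stores the data as Parquet shards). The repository id and the example record are placeholders, not the actual upload code.

```python
from datasets import Dataset, Image

# One record per sampled frame, as produced by the sampling workers (placeholder values).
records = [
    {
        "image": "frames/example_video_1234.png",  # path to a saved PNG
        "video_uid": "example_video",
        "timestamp_sec": 41.1,
        "fps": 30.0,
        "total_frames": 5400,
        "worker_id": 0,
    },
]

ds = Dataset.from_list(records)
ds = ds.cast_column("image", Image())          # decode the PNG paths into an image feature
ds.push_to_hub("<namespace>/<dataset-name>")   # uploaded to the Hub as Parquet shards
```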