Update README.md

README.md CHANGED

@@ -60,12 +60,11 @@ dataset_summary: '
'
---

-# Dataset Card for AVM_Segmentation
-
-<!-- Provide a quick summary of the dataset. -->
-

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 6763 samples.

@@ -97,130 +96,195 @@ session = fo.launch_app(dataset)

### Dataset Description

-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Language(s) (NLP):** en
-- **License:** [More Information Needed]

-### Dataset Sources
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]

## Uses

-<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

-[More Information Needed]

### Source Data

-<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

#### Who are the source data producers?

-[More Information Needed]

-### Annotations [optional]

-#### Annotation

-<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

-[More Information Needed]

#### Who are the annotators?

-[More Information Needed]

## Bias, Risks, and Limitations

-###

-<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

-## Dataset Card Authors

## Dataset Card Contact

-[

# Dataset Card for AVM (Around View Monitoring) Semantic Segmentation Dataset



This repository provides a FiftyOne-compatible version of the AVM semantic segmentation dataset for autonomous parking systems, with enhanced metadata and visualization capabilities.

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 6763 samples.

### Dataset Description

The AVM dataset is a specialized computer vision dataset designed for training semantic segmentation models for autonomous parking systems. It contains bird's-eye view images from around-view monitoring cameras with pixel-level annotations for parking space detection and obstacle avoidance.

* **Curated by:** Chulhoon Jang and team (see the [original repository](https://github.com/ChulhoonJang/avm_dataset))
* **FiftyOne Integration by:** Harpreet Sahota (Voxel51)
* **License:** Not specified; refer to the [original dataset repository](https://github.com/ChulhoonJang/avm_dataset), which currently provides no explicit license

### Dataset Sources

* **Original Repository:** [https://github.com/ChulhoonJang/avm_dataset](https://github.com/ChulhoonJang/avm_dataset)

## Uses

### Direct Use

This dataset is designed for:

- **Autonomous Parking Systems**: Training models to detect and navigate into parking spaces
- **Semantic Segmentation Research**: Benchmarking segmentation algorithms on fisheye/bird's-eye view images
- **Parking Space Detection**: Identifying available vs occupied parking spots
- **Obstacle Detection**: Recognizing curbs, pillars, walls, and other vehicles
- **360° Surround View Systems**: Enhancing camera-based parking assistance features

### Out-of-Scope Use

This dataset should NOT be used for:

- Forward-facing autonomous driving (it's specifically bird's-eye view)
- General object detection (annotations are polygon-based for segmentation)
- High-speed navigation (designed for low-speed parking scenarios)
- Pedestrian detection (pedestrians are not annotated)

## Dataset Structure

### Overview

- **Total Images**: 6,763 (320 x 160 pixels)
- **Training Set**: 4,057 images
- **Test Set**: 2,706 images
- **Outdoor Images**: 3,614
- **Indoor Images**: 3,149

### Semantic Classes

The dataset contains 5 semantic classes with specific RGB color mappings:

| Class | Description | RGB Color | Hex Color |
|-------|-------------|-----------|-----------|
| 0 | Free Space (drivable area) | [0, 0, 255] | #0000FF (Blue) |
| 1 | Marker (parking lines) | [255, 255, 255] | #FFFFFF (White) |
| 2 | Vehicle (other cars) | [255, 0, 0] | #FF0000 (Red) |
| 3 | Other (curbs, pillars, walls) | [0, 255, 0] | #00FF00 (Green) |
| 4 | Ego Vehicle (camera car) | [0, 0, 0] | #000000 (Black) |

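If you work with the raw RGB masks directly (outside of FiftyOne), the color mapping above can be collapsed into integer class IDs. The snippet below is a minimal sketch that assumes the masks are stored as RGB images using exactly these colors; the mask path is hypothetical.

```python
import numpy as np
from PIL import Image

# RGB color -> class index, taken from the table above
COLOR_TO_CLASS = {
    (0, 0, 255): 0,      # Free Space
    (255, 255, 255): 1,  # Marker
    (255, 0, 0): 2,      # Vehicle
    (0, 255, 0): 3,      # Other (curbs, pillars, walls)
    (0, 0, 0): 4,        # Ego Vehicle
}

def rgb_mask_to_class_ids(mask_path):
    """Convert an RGB ground-truth mask into an (H, W) array of class indices."""
    rgb = np.array(Image.open(mask_path).convert("RGB"))
    class_ids = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for color, class_id in COLOR_TO_CLASS.items():
        class_ids[np.all(rgb == color, axis=-1)] = class_id
    return class_ids

# Hypothetical usage:
# ids = rgb_mask_to_class_ids("masks/000123.png")
# print(np.unique(ids))  # e.g. [0 1 2 4]
```
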

### FiftyOne Fields

When parsed into FiftyOne, each sample includes:

| Field | Type | Description |
|-------|------|-------------|
| `filepath` | string | Path to the image file |
| `split` | string | "train" or "test" |
| `sample_id` | int | Unique identifier from the filename |
| `environment` | Classification | "indoor" or "outdoor" (heuristic based on curb presence) |
| `parking_type` | Classification | "perpendicular" or "parallel" |
| `slot_type` | Classification | "closed", "opened", or "no_marker" |
| `polygon_annotations` | Polylines | Normalized polygon coordinates for each object |
| `ground_truth` | Segmentation | Pixel-level segmentation mask |
| `classes_present` | list | Classes present in the image |
| `num_markers` | int | Count of parking marker polygons |
| `num_vehicles` | int | Count of vehicle polygons |
| `has_curb` | bool | Whether a curb is present |
| `has_ego_vehicle` | bool | Whether the ego vehicle is annotated |

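A quick way to confirm these fields is to inspect a single sample. This is a minimal sketch that assumes `dataset` has already been loaded as shown earlier in this card:

```python
# Assumes `dataset` is the FiftyOne dataset loaded earlier in this card
sample = dataset.first()

print(sample.filepath)                  # path to the 320x160 image
print(sample.split)                     # "train" or "test"
print(sample.environment.label)         # "indoor" or "outdoor"
print(sample.ground_truth)              # Segmentation field holding the mask
print(len(sample.polygon_annotations.polylines))  # number of annotated polygons
```
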
## Dataset Creation

### Curation Rationale

The dataset was created to address the lack of bird's-eye view datasets for autonomous parking systems. Most existing datasets focus on forward-facing cameras, but parking assistance requires a top-down perspective to accurately detect parking spaces and navigate safely.

### Source Data

#### Data Collection and Processing

- **Camera Setup**: Around View Monitoring (AVM) system with fisheye cameras
- **View Angle**: Bird's-eye view (top-down perspective)
- **Resolution**: 320 x 160 pixels (optimized for embedded systems)
- **Environments**: Real parking lots (both indoor parking garages and outdoor lots)
- **Conditions**: Various lighting and weather conditions (sunny, cloudy, rainy)

#### Who are the source data producers?

The original dataset was produced by researchers developing autonomous parking systems, likely in an academic or industrial research setting.

### Annotations

#### Annotation Process

1. **Polygon Annotation**: Each object is annotated with precise polygon boundaries stored in YAML format (see the parsing sketch below)
2. **Semantic Masks**: Ground-truth masks are generated from the polygon annotations
3. **Multi-polygon Support**: Multiple instances of the same class are supported (e.g., multiple vehicles)
4. **Coordinate System**: Polygons use image coordinates (0-319 x 0-159)

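The exact annotation schema is documented in the original repository. Purely as an illustration, the sketch below shows how a polygon given in 320 x 160 image coordinates could be normalized into a FiftyOne `Polyline`; the keys `class` and `points` are assumptions, not the dataset's actual field names.

```python
import fiftyone as fo

IMG_W, IMG_H = 320, 160  # image coordinates span 0-319 x 0-159

def polygon_to_polyline(entry):
    # entry is assumed to look like {"class": "marker", "points": [[x1, y1], [x2, y2], ...]}
    normalized = [(x / IMG_W, y / IMG_H) for x, y in entry["points"]]
    return fo.Polyline(
        label=entry["class"],
        points=[normalized],  # one closed polygon
        closed=True,
        filled=True,
    )

# import yaml
# with open("annotations/000123.yaml") as f:   # hypothetical path
#     entries = yaml.safe_load(f)
# polylines = fo.Polylines(polylines=[polygon_to_polyline(e) for e in entries])
```
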
#### Who are the annotators?

Information about specific annotators is not provided in the original dataset documentation.

## Personal and Sensitive Information

The dataset contains images from parking lots but does not include:

- License plate information (resolution too low)
- Personally identifiable information
- Pedestrian annotations
- Location-specific information

## Bias, Risks, and Limitations

### Known Limitations

1. **Limited Resolution**: 320x160 pixels may not capture fine details
2. **Geographic Bias**: The data may come from a limited set of geographic regions
3. **Weather Conditions**: Limited representation of extreme weather
4. **Vehicle Types**: May not include all vehicle types (trucks, motorcycles, etc.)
5. **Parking Styles**: Primarily perpendicular and parallel parking

### Technical Challenges

- **Indoor Reflections**: Reflected lights can be mistaken for parking markers
- **Fisheye Distortion**: The synthesized bird's-eye view introduces geometric distortions
- **Class Imbalance**: Some classes (such as curbs) appear less frequently

## Recommendations

1. **Augmentation**: Apply data augmentation to improve model robustness (a sketch follows this list)
2. **Validation**: Test models on diverse parking environments not represented in the dataset
3. **Resolution**: Consider upscaling techniques if higher resolution is needed
4. **Edge Cases**: Be aware that the dataset may not cover all parking scenarios

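As one possible starting point for the augmentation recommendation, here is a paired image/mask sketch using [Albumentations](https://albumentations.ai), which is not part of this dataset's tooling; the file paths are hypothetical:

```python
import albumentations as A
import numpy as np
from PIL import Image

# Geometric and photometric augmentations applied consistently to image and mask
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

# Hypothetical paths; substitute real image/mask files from the dataset
image = np.array(Image.open("images/000123.png").convert("RGB"))
mask = np.array(Image.open("masks/000123.png").convert("RGB"))

augmented = transform(image=image, mask=mask)
aug_image, aug_mask = augmented["image"], augmented["mask"]
```
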
### Exploring the Dataset

```python
from fiftyone import ViewField as F  # needed for the field expressions below

# View class distribution
print(dataset.count_values("classes_present"))

# Filter indoor vs outdoor
indoor = dataset.match(F("environment.label") == "indoor")
outdoor = dataset.match(F("environment.label") == "outdoor")

# Samples with multiple vehicles
multi_vehicle = dataset.match(F("num_vehicles") > 2)
```

## Citation

### BibTeX

```bibtex
@dataset{avm_dataset,
  title={AVM (Around View Monitoring) System Datasets for Auto Parking},
  author={Chulhoon Jang and others},
  year={2020},
  url={https://github.com/ChulhoonJang/avm_dataset}
}
```

### APA

Jang, C., et al. (2020). AVM (Around View Monitoring) System Datasets for Auto Parking. GitHub. https://github.com/ChulhoonJang/avm_dataset

## More Information

### Related Resources

- [Original Dataset Repository](https://github.com/ChulhoonJang/avm_dataset)
- [FiftyOne Documentation](https://docs.voxel51.com)
- Implementation code for semantic segmentation models (link in original repo)

### Dataset Statistics

- Average polygons per class (see the aggregation sketch below):
  - Ego vehicle: 1.0 polygons (fixed position)
  - Markers: 2.6 polygons per image
  - Vehicles: 2.1 polygons per image
  - Curbs: 1.4 polygons per image (when present)

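Figures like these can be recomputed directly with FiftyOne aggregations. A minimal sketch, assuming `dataset` is loaded as shown earlier in this card and using the count fields listed under "FiftyOne Fields":

```python
from fiftyone import ViewField as F

# Assumes `dataset` is the FiftyOne dataset loaded earlier in this card
print(dataset.mean("num_markers"))   # average marker polygons per image
print(dataset.mean("num_vehicles"))  # average vehicle polygons per image

# Restrict to images where a curb is actually present
with_curb = dataset.match(F("has_curb") == True)
print(with_curb.count())
```
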
## Dataset Card Authors

- **FiftyOne Integration**: Harpreet Sahota (Voxel51)
- **Original Dataset**: Chulhoon Jang and team

## Dataset Card Contact

- **Original dataset**: See [original repository](https://github.com/ChulhoonJang/avm_dataset)

---

## Acknowledgments

Thanks to the original dataset creators for making this valuable resource available to the research community. The FiftyOne integration enhances the dataset's usability for modern computer vision workflows.