Dataset Card for KABR Worked Examples
This dataset comprises manually annotated bounding-box detections, mini-scenes, behavior annotations, and associated telemetry for the three drone video sessions used in kabr-tools case studies. Drone video was collected at Mpala Research Centre in January 2023; see the full video dataset for more information on the original video context.
Dataset Details
Annotations were created to evaluate the kabr-tools pipeline and to conduct case studies on the Grevy's zebra landscape of fear and inter-species spatial distribution. Annotations include manual detections and tracks, mini-scenes cut from the source videos, behavior annotations from an X3D action recognition model, and associated drone telemetry data. The detections contain bounding box coordinates, image file names, and class labels for each annotated animal. Annotations were created using CVAT to manually draw bounding boxes around animals in a selection of raw drone videos. The annotations were then exported as XML files and used to create the provided mini-scenes. The KABR X3D model was used to label the mini-scenes with predicted behaviors. Telemetry data was exported from Airdata.
Session Summary
Session | Date Collected | Demographic Information and Habitat | Video File IDs in Session | Session Source Videos (link) |
---|---|---|---|---|
ex-1 | 2023-01-18 | 2 adult male Grevy's zebras in an open plain | DJI_0068, DJI_0069, DJI_0070, DJI_0071 | imageomics/KABR-raw-videos/18_01_2023_session_7/ |
ex-2 | 2023-01-20 | 5 Grevy's zebras in a semi-open habitat along a roadway | DJI_0142, DJI_0143, DJI_0144, DJI_0145, DJI_0146, DJI_0147 | imageomics/KABR-raw-videos/20_01_2023_session_3/ |
ex-3 | 2023-01-21 | Mixed herd of 3 reticulated giraffes, 2 plains zebras, and 11 Grevy's zebras in a closed habitat with dense vegetation near Mo Kenya | DJI_0206, DJI_0208, DJI_0210, DJI_0211 | imageomics/KABR-raw-videos/21_01_2023_session_5/ |
Note: Session numbers (as used in identifiers) are based on all KABR video sessions, while this dataset focuses on Sessions 7, 3, and 5, which we label as Sessions ex-1, ex-2, and ex-3, respectively.
Dataset Structure
├── behavior/
│   ├── 18_01_2023_session_7-DJI_0068.csv
│   ├── 18_01_2023_session_7-DJI_0069.csv
│   ├── ...
│   ├── 21_01_2023_session_5-DJI_0211.csv
│   └── 21_01_2023_session_5-DJI_0212.csv
├── detections/
│   ├── 18_01_2023_session_7-DJI_0068.xml
│   ├── 18_01_2023_session_7-DJI_0069.xml
│   ├── ...
│   ├── 21_01_2023_session_5-DJI_0211.xml
│   └── 21_01_2023_session_5-DJI_0212.xml
├── mini_scenes/
│   ├── 18_01_2023_session_7-DJI_0068/
│   │   ├── 0.mp4
│   │   ├── 1.mp4
│   │   └── metadata/
│   │       ├── DJI_0068.jpg
│   │       ├── DJI_0068_metadata.json
│   │       └── DJI_0068_tracks.xml
│   ├── 18_01_2023_session_7-DJI_0069/
│   │   ├── 0.mp4
│   │   ├── 1.mp4
│   │   └── metadata/
│   │       ├── DJI_0069.jpg
│   │       ├── DJI_0069_metadata.json
│   │       └── DJI_0069_tracks.xml
│   ├── ...
│   ├── 21_01_2023_session_5-DJI_0211/
│   │   ├── 0.mp4
│   │   ├── ...
│   │   ├── 33.mp4
│   │   └── metadata/
│   │       ├── DJI_0211.jpg
│   │       ├── DJI_0211_metadata.json
│   │       └── DJI_0211_tracks.xml
│   └── 21_01_2023_session_5-DJI_0212/
│       ├── 0.mp4
│       ├── 1.mp4
│       ├── ...
│       ├── 14.mp4
│       └── metadata/
│           ├── DJI_0212.jpg
│           ├── DJI_0212_metadata.json
│           └── DJI_0212_tracks.xml
├── README.md
└── telemetry/
    ├── 18_01_2023-session_7-Flight_Airdata.csv
    ├── 20_01_2023-session_3-Flight_Airdata.csv
    └── 21_01_2023-session_5-Flight_Airdata.csv
Note: Each video has an associated `video_id`, defined as `<DD>_01_2023_session_<session-number>-DJI_<video-number>` (e.g., `21_01_2023_session_5-DJI_0212`). This ID is used to identify and link all (meta)data associated with that video.
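For programmatic work, the ID can be split into its parts with a small regular expression. A minimal sketch (the `parse_video_id` helper is hypothetical, not part of kabr-tools):

```python
import re

# Matches IDs like "21_01_2023_session_5-DJI_0212".
VIDEO_ID_PATTERN = re.compile(
    r"(?P<day>\d{2})_(?P<month>\d{2})_(?P<year>\d{4})"
    r"_session_(?P<session>\d+)-DJI_(?P<video>\d{4})"
)

def parse_video_id(video_id: str) -> dict:
    """Split a video_id into date, session, and video-number components."""
    match = VIDEO_ID_PATTERN.fullmatch(video_id)
    if match is None:
        raise ValueError(f"Unrecognized video_id: {video_id}")
    return match.groupdict()

print(parse_video_id("21_01_2023_session_5-DJI_0212"))
# {'day': '21', 'month': '01', 'year': '2023', 'session': '5', 'video': '0212'}
```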
What each file/folder is for
Path / Pattern | Purpose |
---|---|
`behavior/<video_id>.csv` | Per-video roll-ups of X3D behavior predictions: one row per mini-scene clip with the predicted label plus references (video, track, frame). |
`detections/<video_id>.xml` | Manual detections/tracks per source video (CVAT “tracks” XML). One `<track>` per animal across frames; used to cut mini-scenes. |
`mini_scenes/<video_id>/DJI_XXXX.mp4` | The source video referenced by detections for that `<video_id>`. |
`mini_scenes/<video_id>/<k>.mp4` | Mini-scenes (short clips) cut from the source video based on detection tracks (`0.mp4`, `1.mp4`, ...). |
`mini_scenes/<video_id>/metadata/DJI_XXXX_tracks.xml` | Copy of the CVAT tracks used to generate the mini-scenes (provenance). |
`mini_scenes/<video_id>/metadata/DJI_XXXX_metadata.json` | Video-level metadata (session/date, FPS, resolution, timing, etc.). |
`mini_scenes/<video_id>/metadata/DJI_XXXX.jpg` | Thumbnail/keyframe for quick preview. |
`mini_scenes/<video_id>/actions/` | Per-clip auto behavior labels from the X3D action model (CSV or JSON; presence varies by video). |
`telemetry/<DD>_01_2023-session_<session-number>-Flight_Airdata.csv` | Drone flight logs (Airdata export) for the corresponding sessions (timing, altitude, battery, etc.). |
`README.md` | Repository-level notes and usage tips. |
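Since everything is keyed by `video_id`, a short walk over the layout above can gather each video's artifacts in one place. A minimal sketch, assuming the directory structure documented above (the `index_by_video_id` helper is hypothetical):

```python
from pathlib import Path

def index_by_video_id(root: str) -> dict:
    """Group detection XMLs, behavior CSVs, and mini-scene folders
    by their shared video_id, following the layout documented above."""
    base = Path(root)
    index = {}
    for xml_path in (base / "detections").glob("*.xml"):
        index.setdefault(xml_path.stem, {})["detections"] = xml_path
    for csv_path in (base / "behavior").glob("*.csv"):
        index.setdefault(csv_path.stem, {})["behavior"] = csv_path
    for scene_dir in (base / "mini_scenes").iterdir():
        if scene_dir.is_dir():
            index.setdefault(scene_dir.name, {})["mini_scenes"] = scene_dir

    return index

# index = index_by_video_id("kabr-worked-example")
# index["21_01_2023_session_5-DJI_0212"]["behavior"]  # -> Path to the CSV
```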
Data instances
- Detection instance (XML): one `<track>` spans all frames of a video; each `<box>` is a frame-level bounding box with coordinates and flags.
- Mini-scene instance (MP4): a short clip indexed by file name (`<k>.mp4`) under `mini_scenes/<video_id>/`.
- Behavior instance (CSV row): one mini-scene with X3D-predicted behavior and references to the clip (plus optional confidence/timing).
- Telemetry instance (CSV row): one flight-log record from Airdata with timestamped vehicle context.
Data fields
A. Detections (CVAT “tracks” XML)
Element / Attribute | Type | Example | Meaning |
---|---|---|---|
`/annotations/version` | string | `1.1` | Annotation file (XML) version. |
`/annotations/track@id` | integer | `0` | Unique ID for a tracked object within the video. |
`/annotations/track@label` | string | `Grevy` | Class/species label. |
`/annotations/track@source` | string | `manual` | How the annotation was created; all annotations in this dataset are `manual`. |
`/annotations/track/box@frame` | int (0-based) | `0`, `1`, `2`, ... | Frame index. |
`/annotations/track/box@outside` | enum {`0`, `1`} | `0` | Visibility flag: `0` = present, `1` = not visible. |
`/annotations/track/box@occluded` | enum {`0`, `1`} | `0` | Occlusion flag (`1` indicates the subject is occluded). |
`/annotations/track/box@keyframe` | enum {`0`, `1`} | `1` | Keyframe marker; every 10th frame is considered a keyframe (CVAT default setting). |
`/annotations/track/box@xtl` | float (px) | `2342.00` | X coordinate of the top-left corner. |
`/annotations/track/box@ytl` | float (px) | `2427.00` | Y coordinate of the top-left corner. |
`/annotations/track/box@xbr` | float (px) | `2530.00` | X coordinate of the bottom-right corner. |
`/annotations/track/box@ybr` | float (px) | `2623.00` | Y coordinate of the bottom-right corner. |
`/annotations/track/box@z_order` | integer | `0` | Drawing order. |
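Given the attributes above, the tracks XML can be read with the standard library. A minimal sketch using `xml.etree.ElementTree`; the `load_tracks` helper is illustrative, not part of kabr-tools:

```python
import xml.etree.ElementTree as ET

def load_tracks(xml_path: str) -> list[dict]:
    """Read a CVAT 'tracks' XML and return one dict per frame-level box."""
    boxes = []
    root = ET.parse(xml_path).getroot()
    for track in root.iter("track"):
        for box in track.iter("box"):
            if box.get("outside") == "1":  # subject not visible in this frame
                continue
            boxes.append({
                "track_id": int(track.get("id")),
                "label": track.get("label"),
                "frame": int(box.get("frame")),
                "xtl": float(box.get("xtl")),
                "ytl": float(box.get("ytl")),
                "xbr": float(box.get("xbr")),
                "ybr": float(box.get("ybr")),
                "occluded": box.get("occluded") == "1",
            })
    return boxes

# boxes = load_tracks("detections/21_01_2023_session_5-DJI_0212.xml")
```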
B. Behavior CSV (auto labels; one file per source video)
Note: Column names may vary slightly by export; use the header in each CSV as ground truth.
Column (typical) | Example | Meaning |
---|---|---|
`clip_path` or `clip_id` | `mini_scenes/21_01_2023_session_5-DJI_0208/33.mp4` | Relative path to the mini-scene clip. |
`source_video` | `DJI_0208.mp4` | Name of the parent/source video. |
`video_id` | `21_01_2023_session_5-DJI_0208` | Folder/video identifier, used to identify and link all (meta)data associated with that source video. |
`clip_index` | `33` | Index of the clip within the video folder. |
`behavior` | `walking` | X3D-predicted action/behavior label. |
`confidence` | `0.92` | Model confidence/probability (if provided). |
`start_frame` | `1234` | First frame of the segment (if provided). |
`end_frame` | `1450` | Last frame of the segment (if provided). |
`start_time` | `00:00:41.2` | Segment start time (if provided). |
`end_time` | `00:00:48.8` | Segment end time (if provided). |
`species` | `Grevy` | Species label (if propagated/available). Only three possible labels: `Grevy`, `Plain Zebra`, or `Giraffe`. |
`notes` | — | Free-text notes or flags (optional). |
`model` | `x3d` | Identifier of the model used to label. |
`model_version` | `x3d_m` | Specific checkpoint/version tag (optional). |
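Because column names can vary between exports, it is safest to read the behavior CSVs through their own headers rather than by position. A minimal sketch (the `load_behavior` helper is hypothetical):

```python
import csv

def load_behavior(csv_path: str) -> list[dict]:
    """Read a behavior CSV keyed by its header row, since column
    names may vary slightly between exports."""
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

# rows = load_behavior("behavior/21_01_2023_session_5-DJI_0208.csv")
# print(rows[0].keys())                               # inspect actual columns
# walking = [r for r in rows if r.get("behavior") == "walking"]
```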
C. Mini-scene metadata JSON (per source video)
Typical keys (presence may vary):
Key | Example | Meaning |
---|---|---|
`video_id` | `21_01_2023_session_5-DJI_0208` | Folder/video identifier, used to identify and link all (meta)data associated with that source video. |
`source_video` | `DJI_0208.mp4` | Original MP4 filename. |
`session_date` | `2023-01-21` | Capture date (`YYYY-MM-DD`). |
`session_id` | `session_5` | Field session tag. |
`fps` | `29.97` | Frames per second of the recording. |
`resolution` | `[3840, 2160]` | Width × height (px), in list format. |
`duration_s` | `123.45` | Video duration (seconds). |
`timezone` | `Africa/Nairobi` | Local timezone of the recording (UTC+3). |
`generator` | `mini_scene_cutter@<git-sha>` | Tool/commit that wrote the metadata. |
`tracks_xml` | `DJI_0208_tracks.xml` | Provenance link to the CVAT tracks file. |
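A minimal sketch for reading the per-video metadata JSON; since key presence varies, prefer `dict.get()` over direct indexing (the `load_scene_metadata` helper is hypothetical):

```python
import json
from pathlib import Path

def load_scene_metadata(scene_dir: str) -> dict:
    """Read the metadata JSON under mini_scenes/<video_id>/metadata/."""
    meta_dir = Path(scene_dir) / "metadata"
    (json_path,) = meta_dir.glob("*_metadata.json")  # exactly one per video
    with open(json_path) as f:
        return json.load(f)

# meta = load_scene_metadata("mini_scenes/21_01_2023_session_5-DJI_0208")
# fps = meta.get("fps")                        # e.g., 29.97
# resolution = meta.get("resolution")          # e.g., [3840, 2160]
```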
D. Telemetry CSV (Airdata export)
Columns depend on Airdata export settings; common fields include:
Column (common) | Example | Meaning |
---|---|---|
`UTC Timestamp` | `2023-01-21 12:49:07` | Log timestamp (UTC). |
`Latitude`, `Longitude` | `0.28123`, `37.12345` | Aircraft location in decimal degrees. |
`Altitude (m)` | `68.2` | Altitude in meters above takeoff or MSL (per export settings). |
`AGL (m)` | `47.9` | Height above ground level, in meters (if provided). |
`Speed (m/s)` | `9.4` | Horizontal speed in meters per second. |
`Heading (deg)` | `135` | Yaw/heading. |
`Battery (%)` | `54` | Remaining battery percentage. |
`FlyState` | `P-GPS` | High-level drone status, such as `Motors_Started`, `Assisted_Takeoff`, `P-GPS` (GPS positioning mode), or `Landing`. |
`Distance (m)` | `122.5` | Distance in meters from the home point, computed as the difference between the current GPS position and the home-point GPS position. |
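A minimal sketch for loading a flight log with pandas; the exact columns depend on the Airdata export settings, so inspect them before filtering:

```python
import pandas as pd

df = pd.read_csv("telemetry/21_01_2023-session_5-Flight_Airdata.csv")
print(df.columns.tolist())  # confirm the columns this export actually has

# Parse timestamps if the expected column is present.
if "UTC Timestamp" in df.columns:
    df["UTC Timestamp"] = pd.to_datetime(df["UTC Timestamp"], utc=True)

# Example: restrict the log to a time window of interest before aligning
# with video timing (alignment may require manual adjustment; see below).
# window = df[(df["UTC Timestamp"] >= start) & (df["UTC Timestamp"] <= end)]
```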
Dataset Creation
Curation Rationale
Created to evaluate the kabr-tools pipeline and to conduct case studies on the Grevy's zebra landscape of fear and inter-species spatial distribution.
Source Data
Data Collection and Processing
Data collected at Mpala Research Centre, Kenya, in January 2023. The data was collected using a DJI Air 2S drone and manually annotated using CVAT. The annotations were exported as XML files.
Who are the source data producers?
Imageomics/KABR-raw-videos dataset authors.
Annotations
Annotation process
A local instance of CVAT was used to manually annotate bounding boxes around animals in the videos. The annotations were exported as XML files and used to create mini-scenes with `tracks_extractor.py`. The mini-scenes were then labeled with predicted behaviors by the KABR X3D action recognition model via `miniscene2behavior.py`.
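Conceptually, a mini-scene is cut by cropping a window around one track's boxes frame by frame. The sketch below illustrates the idea with OpenCV; it is not the `tracks_extractor.py` implementation, and the box format and fixed padding are assumptions:

```python
import cv2

def cut_mini_scene(video_path: str, boxes: list[dict], out_path: str, pad: int = 50):
    """Conceptual sketch only: crop a padded window around one track's
    boxes (as parsed from the tracks XML) and write the crops as a clip."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    by_frame = {b["frame"]: b for b in boxes}
    writer, size, frame_idx = None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        box = by_frame.get(frame_idx)
        if box is not None:
            x0 = max(int(box["xtl"]) - pad, 0)
            y0 = max(int(box["ytl"]) - pad, 0)
            x1 = int(box["xbr"]) + pad
            y1 = int(box["ybr"]) + pad
            crop = frame[y0:y1, x0:x1]
            if writer is None:
                size = (crop.shape[1], crop.shape[0])  # (width, height)
                fourcc = cv2.VideoWriter_fourcc(*"mp4v")
                writer = cv2.VideoWriter(out_path, fourcc, fps, size)
            writer.write(cv2.resize(crop, size))  # keep a constant frame size
        frame_idx += 1
    cap.release()
    if writer is not None:
        writer.release()
```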
Who are the annotators?
Alison Zhong and Jenna Kline
Personal and Sensitive Information
Videos were trimmed (as needed) to remove people before annotation. Endangered species are included in the dataset, but no personal or sensitive information is included.
Considerations for Using the Data
Intended Use Cases
This dataset serves as a worked example for the kabr-tools pipeline and is specifically designed for:
- Pipeline demonstration: Showing complete end-to-end processing from raw videos to behavioral annotations.
- Method validation: Evaluating automated detection and behavior recognition against manual annotations.
- Case study research: Supporting specific research questions on Grevy's zebra landscape of fear and inter-species spatial distribution.
- Educational purposes: Teaching researchers how to use the kabr-tools pipeline with real data.
- Reproducibility: Providing a reference implementation with known inputs and outputs.
Important Data Considerations
Limited scope: This is a demonstration dataset with only 3 sessions and 15 video files, designed to illustrate methodology rather than provide comprehensive coverage.
Session heterogeneity: Each example session represents distinctly different scenarios:
- Session ex-1: Minimal complexity (2 male Grevy's zebras, open habitat)
- Session ex-2: Moderate complexity (5 Grevy's zebras, semi-open roadway habitat)
- Session ex-3: High complexity (mixed species, dense vegetation, 16 total animals)
Processing completeness: Not all videos have complete processing outputs; some lack `actions/` folders, reflecting real-world pipeline execution variability.
Annotation methodology: Manual detections serve as ground truth, while behavior labels are X3D model predictions, not expert-validated behaviors.
Bias, Risks, and Limitations
Sample size limitations:
- Only 15 video files across 3 sessions
- Insufficient for statistical generalization
- Designed for demonstration, not comprehensive analysis
Species representation bias:
- Heavily weighted toward Grevy's zebras (endangered species focus)
- Giraffes only present in one session (Session ex-3)
- Plains zebras only in mixed-species context
- May not represent typical behavioral patterns for each species
Habitat and temporal constraints:
- Single location (Mpala Research Centre, Kenya)
- 3-day collection window (January 18-21, 2023)
- Limited environmental and seasonal variability
- Habitat types may not represent species' full range
Technical processing limitations:
- X3D behavior predictions are automated, not expert-validated
- Mini-scene extraction dependent on manual annotation quality
- Telemetry synchronization with video timestamps may require adjustment
- Some videos lack complete behavioral annotation outputs
Methodological constraints:
- Manual annotations by only 2 annotators (potential inter-annotator variability)
- CVAT tracking may have limitations in dense vegetation (Session ex-3)
- Behavior model trained on different dataset, may not generalize perfectly
Recommendations
For pipeline evaluation and development:
- Use manual detections in `detections/*.xml` as ground truth for automated detection validation (see the IoU sketch after this list)
- Compare processing outputs across sessions to understand pipeline performance in different scenarios
- Use Session ex-1 (simple) for initial testing, Session ex-3 (complex) for stress testing
- Validate timestamp alignment between telemetry and video data before spatial analysis
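For the detection-validation point above, a standard comparison metric is intersection-over-union (IoU) between manual and automated boxes. A minimal sketch; the 0.5 acceptance threshold is a common convention, not a value prescribed by this dataset:

```python
def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two boxes given as (xtl, ytl, xbr, ybr)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def matches(manual: list, predicted: list, thresh: float = 0.5) -> list:
    """Pair manual and predicted boxes (same frame) that meet the threshold."""
    return [(m, p) for m in manual for p in predicted if iou(m, p) >= thresh]
```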
For case study research:
- Landscape of fear studies: Focus on Grevy's zebra data from Sessions ex-1 and ex-2; use telemetry data to correlate spatial position with behaviors
- Inter-species analysis: Use Session ex-3 mixed-species data; consider habitat complexity when interpreting interactions
- Account for small sample sizes in statistical analyses and interpretation
For educational use:
- Start with Session ex-1 data for learning pipeline basics
- Progress through sessions in order of increasing complexity
- Use metadata files to understand processing provenance
- Examine both successful and incomplete processing examples
Technical recommendations:
- Verify file completeness before analysis; not all videos have `actions/` folders (see the sketch after this list)
- Check CSV headers, as column names may vary between exports
- Use metadata JSON files to understand video-specific processing parameters
- Cross-reference telemetry timestamps with video timing for spatial-behavioral analysis
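A minimal completeness check over the layout documented above (the `report_completeness` helper is hypothetical):

```python
from pathlib import Path

def report_completeness(root: str = ".") -> None:
    """Flag videos missing expected outputs; the actions/ folder
    in particular is only present for some videos."""
    base = Path(root)
    for scene_dir in sorted((base / "mini_scenes").iterdir()):
        if not scene_dir.is_dir():
            continue
        video_id = scene_dir.name
        missing = []
        if not (base / "behavior" / f"{video_id}.csv").exists():
            missing.append("behavior CSV")
        if not (base / "detections" / f"{video_id}.xml").exists():
            missing.append("detections XML")
        if not (scene_dir / "actions").exists():
            missing.append("actions/ folder")
        if missing:
            print(f"{video_id}: missing {', '.join(missing)}")

# report_completeness("kabr-worked-example")
```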
Data interpretation cautions:
- Treat X3D behavior predictions as model outputs, not ground truth
- Consider habitat context when interpreting behavioral patterns
- Account for species-specific behavioral repertoires in analysis
- Use this dataset to understand methodology, not to draw broad ecological conclusions
References
- Original KABR mini-scene dataset: https://huggingface.co/datasets/imageomics/KABR
- KABR Raw Videos (not processed for the KABR mini-scene dataset): https://huggingface.co/datasets/imageomics/KABR-raw-videos
- kabr-tools repository: https://github.com/Imageomics/kabr-tools
- Mpala Research Centre: https://mpala.org/
Licensing Information
This dataset is dedicated to the public domain for the benefit of scientific pursuits under the CC0 1.0 Universal Public Domain Dedication. We ask that you cite the dataset and related publications using the citations below if you make use of it in your research.
Citation
BibTeX:
Dataset
@misc{KABR_worked_example,
author = {Zhong, Alison and Kline, Jenna and Kholiavchenko, Maksim and Stevens, Sam and Sheets, Alec and Babu, Reshma and Banerji, Namrata and Campolongo, Elizabeth and Thompson, Matthew and Van Tiel, Nina and Miliko, Jackson and Rosser, Neil and Stewart, Charles and Berger-Wolf, Tanya and Rubenstein, Daniel},
title = {KABR Worked Example: Manually Annotated Detections and Behavioral Analysis for Kenyan Wildlife Pipeline Demonstration},
year = {2025},
url = {https://huggingface.co/datasets/imageomics/kabr-worked-example},
publisher = {Hugging Face},
doi = { }
}
Related Publications
@inproceedings{kholiavchenko2024kabr,
title={KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition from Drone Videos},
author={Kholiavchenko, Maksim and Kline, Jenna and Ramirez, Michelle and Stevens, Sam and Sheets, Alec and Babu, Reshma and Banerji, Namrata and Campolongo, Elizabeth and Thompson, Matthew and Van Tiel, Nina and Miliko, Jackson and Bessa, Eduardo and Duporge, Isla and Berger-Wolf, Tanya and Rubenstein, Daniel and Stewart, Charles},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={31-40},
year={2024}
}
kabr-tools manuscript (in preparation)
@article{kabr_tools_manuscript,
title={kabr-tools: An Open-Source Pipeline for Automated Wildlife Behavior Analysis from Drone Videos},
author={Jenna Kline and Maksim Kholiavchenko and Samuel Stevens and Nina van Tiel and Namrata Banerji and Matthew Thompson and Elizabeth Campolongo and Michelle Ramirez and Alec Sheets and Alison Zhong and Sowbaranika Balasubramaniam and Isla Duporge and Jackson Miliko and Neil Rosser and Tanya Berger-Wolf and Charles V. Stewart and Daniel I. Rubenstein},
journal={[Journal name]},
year={[Year]},
note={Manuscript in preparation}
}
Please also cite the original data source:
- KABR Raw Videos: https://huggingface.co/datasets/imageomics/KABR-raw-videos
Contributions
This work was supported by the Imageomics Institute, which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Additional support was provided by the AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE), funded by the US National Science Foundation under Award #2112606.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
The raw data fed into the KABR tools pipeline to produce this worked example was collected at the Mpala Research Centre in Kenya, in accordance with Research License No. NACOSTI/P/22/18214. The data collection protocol adhered strictly to the guidelines set forth by the Institutional Animal Care and Use Committee under permission No. IACUC 1835F.
Dataset Creation Contributors
- Data Collection: Field team at Mpala Research Centre, Kenya
- Manual Annotations: Alison Zhong and Jenna Kline
- Pipeline Development: kabr-tools development team
- Behavioral Analysis: X3D model predictions using KABR-trained models
- Data Curation: Alison Zhong and Jenna Kline
- Quality Assurance: Imageomics Institute research team
Glossary
Mini-scene: Short video clips (typically 5-10 seconds) extracted from source videos, centered on individual animals based on tracking annotations.
Mo Kenya: A prominent hill to the north of Mpala.
CVAT: Computer Vision Annotation Tool - open-source software used for manual video annotation and object tracking.
X3D: 3D CNN architecture used for video-based action recognition, adapted for animal behavior classification in the KABR project. Model: Imageomics/X3D-KABR-Kinetics.
Track: A sequence of bounding boxes following a single animal across multiple video frames.
Telemetry: Flight data recorded by the drone during video capture, including GPS coordinates, altitude, speed, and battery status.
Session: A discrete data collection period, typically representing one flight or filming session on a specific date.
More Information
For detailed usage instructions and code examples, see the kabr-tools repository and associated docs.
For questions about the broader KABR project and related datasets, visit the Imageomics Institute website and see the KABR Collection.
This dataset is part of a larger effort to develop automated methods for wildlife monitoring and conservation using computer vision and machine learning techniques.
Dataset Card Authors
Jenna Kline
Dataset Card Contact
kline dot 377 at osu dot edu