Dataset structure (≈1k rows):

| Column | Type |
|---|---|
| image | image, width ≈ 379 px |
| label | class label, 3 classes (0, 1, 2) |
Liveness Detection Dataset in the Wild with High Variety
A dataset for presentation attack detection (PAD)/liveness detection with three classes of presentation attacks: cardboard masks with cut-out eyes; elastic textile mesh masks with full-color face printing; full-size latex 3D masks. Mask types were selected based on real-world attack scenarios.
The key value is high variety: different actors, a large number of individual masks, varied lighting (indoor/outdoor, natural/artificial), diverse backgrounds and locations, shooting angles and distances, different cameras/lenses, and accessories (glasses, headwear, beards). This range reduces overfitting to specific mask textures and improves generalization in passive liveness detection and attack-type recognition.
Types of attacks:
- Cardboard mask: flat face print with eyes cut out to allow real eye movement; a flat image with hard edges around the outline
- Textile mesh mask: nylon/elastic fabric with a full-color face print; the fabric fits snugly around the head and partially replicates the 3D shape of the face
- Latex 3D mask: full-size realistic mask with a pronounced “skin” texture; some have cut-out eyes and are complemented by external attributes
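The integer labels correspond to these three attack types. A minimal sketch of resolving them, reusing `ds` from the loading example above; the human-readable mapping below is an assumption that mirrors this list, not the dataset's canonical label strings:

```python
# The dataset's own class names come from the ClassLabel feature.
print(ds.features["label"].names)

ATTACK_TYPES = {
    0: "cardboard mask (flat print, cut-out eyes)",    # assumed mapping
    1: "textile mesh mask (full-color face print)",    # assumed mapping
    2: "latex 3D mask (realistic 'skin' texture)",     # assumed mapping
}

sample = ds[0]
print(sample["label"], "->", ATTACK_TYPES[sample["label"]])
```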
Why is this combination of attacks optimal?
They cover three escalating attack types (flat print → textile 3D deformation → realistic 3D object), which allows testing model robustness against increasing attack realism.
Why in-the-wild collection improves robustness
Crowd-sourced, in-the-wild data covers the natural long-tail diversity of real scenarios, so models trained on it generalize better and rely less on hidden shortcuts typical of controlled, in-office datasets
How in-the-wild fundamentally differs from in-house
- Heterogeneous hardware. Smartphones/webcams across brands and generations → different sensors, optics, ISP pipelines, noise/exposure, stabilization, frame rates, and codecs
- Unstaged conditions. Random lighting (daylight/artificial, flicker, backlight), backgrounds, reflections, shadows, and outdoor/indoor scenes
- Human behavior. Natural poses, micro-movements, expressions, speed and amplitude of head turns, varying camera distance
- Broad spectrum of presentation attack instruments (PAIs)/masks. Different materials, shapes, and application methods; real-world artifacts (glare, folds, misalignment); plus “imperfect” spoofing attempts
- Demographic and cultural diversity. Skin tones, makeup, accessories (glasses, headwear), and styles that are rarely covered in office setups
In datasets collected in-house, even with artificial variation of backgrounds and angles, common constants remain: the same camera pool, typical lighting and backgrounds, repeated rooms and collection crews. Models quickly latch onto these spurious cues (e.g., a characteristic white balance or wall texture), which harms transfer to real-world conditions.
Because the sources of variability differ, in-the-wild and in-house datasets have weak overlap. This makes our dataset a valuable external test bed: if a model performs well here, the likelihood of failure in real production is substantially lower
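As a concrete (hypothetical) example of the test-bed usage, the sketch below scores an existing classifier per attack class. `predict` is a placeholder for your own model's inference, and `ds` is reused from the loading example above:

```python
from collections import Counter

def predict(image):
    # Placeholder: a real implementation would run your PAD / attack-type
    # classifier on the image and return a class id in {0, 1, 2}.
    return 0

correct, total = Counter(), Counter()
for sample in ds:
    label = sample["label"]
    total[label] += 1
    correct[label] += int(predict(sample["image"]) == label)

# Per-class accuracy reveals which attack type the model handles worst.
for label_id in sorted(total):
    acc = correct[label_id] / total[label_id]
    print(f"class {label_id}: {acc:.2%} over {total[label_id]} samples")
```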
The full version of the dataset is available for commercial use: leave a request on our website, Axonlabs, to purchase the dataset 💰
Best Uses
This dataset is ideal for organizations striving to meet or exceed iBeta Level 2 certification. By integrating it, teams can greatly enhance the training of anti-spoofing algorithms, ensuring robust and accurate performance in practical scenarios.
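For training, a Hugging Face `Dataset` plugs directly into a PyTorch `DataLoader`. A minimal preparation sketch, assuming `torch`/`torchvision` are installed and reusing `ds` from above; the augmentations are illustrative, not a recommended anti-spoofing recipe:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import transforms

# Illustrative augmentations; tune for your own pipeline.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

def collate(batch):
    # Each sample is a dict with a PIL "image" and an int "label".
    images = torch.stack([train_tf(s["image"].convert("RGB")) for s in batch])
    labels = torch.tensor([s["label"] for s in batch])
    return images, labels

loader = DataLoader(ds, batch_size=32, shuffle=True, collate_fn=collate)
images, labels = next(iter(loader))
print(images.shape, labels.shape)  # e.g. [32, 3, 224, 224] and [32]
```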