---
language:
- en
tags:
- audio
- codec classification
- audio forensics
license: cc-by-4.0
datasets: []
task_categories:
- audio-classification
pretty_name: Hidden Compression Artifact Detection in Audio Codecs
---

# Encoding Detection

This dataset is designed for the forensic detection of hidden compression artifacts in audio that has been re-encoded with various codecs. It supports supervised learning tasks that identify the codec used (WAV, MP3, AAC) from subtle compression patterns in 5-second audio clips.

## Dataset Details

### Dataset Description

This dataset contains audio segments derived from clean speech recordings, re-encoded with three different codecs: WAV, MP3, and AAC (Opus was excluded due to issues). WAV, being uncompressed PCM, serves as the artifact-free reference class, while MP3 and AAC are lossy. All files are resampled to 16 kHz and segmented into 5-second chunks. The dataset supports training and evaluating models that detect encoding artifacts introduced during lossy audio compression.

- **Curated by:** Deep Das
- **Funded by:** Not funded
- **Shared by:** Deep Das
- **Language(s) (NLP):** English
- **License:** CC BY 4.0

### Dataset Sources

- **Repository:** [https://github.com/THE-DEEPDAS/Encoding-Detection](https://github.com/THE-DEEPDAS/Encoding-Detection)

## Uses

### Direct Use

This dataset is intended for training and evaluating machine learning models that classify audio by the encoding artifacts introduced by different compression algorithms. It is especially useful in forensic and signal processing research.

### Out-of-Scope Use

This dataset should not be used to identify individuals or for speaker recognition. It should not be used in applications involving personally identifiable information or biometric identification.

## Dataset Structure

Each audio file is a 5-second, 16 kHz segment. Files are stored in directories named after their codec label (e.g., `wav/`, `mp3/`, `aac/`). Metadata includes the file path and label.
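Given this layout, the metadata table can be rebuilt directly from the directory names. A minimal sketch (the `build_metadata` helper and the `data/` root are illustrative assumptions, not part of the released dataset):

```python
from pathlib import Path

# Codec labels match the directory names described above.
CODEC_LABELS = ["wav", "mp3", "aac"]

def build_metadata(root):
    """Collect {path, label} records from codec-named subdirectories.

    The label of each clip is simply the name of the directory it
    lives in, mirroring how the dataset assigns labels.
    """
    root = Path(root)
    records = []
    for label in CODEC_LABELS:
        for path in sorted((root / label).glob("*")):
            if path.is_file():
                records.append({"path": str(path), "label": label})
    return records
```

For example, `build_metadata("data/")` would return one record per clip, ready to feed into a training loop or a `datasets.Dataset.from_list` call.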
## Dataset Creation

### Curation Rationale

Lossy audio codecs introduce subtle compression artifacts that can persist through re-encoding. This dataset enables machine-learning-based detection of such artifacts, which has applications in digital forensics.

### Source Data

#### Data Collection and Processing

- Audio was originally sourced from public clean speech datasets.
- Files were split into 5-second chunks.
- Each chunk was encoded with three codecs (WAV, MP3, AAC) using `ffmpeg`.
- Preprocessing included normalization, denoising, and silence removal.

#### Who are the source data producers?

Clean voice data was sourced from public datasets such as LibriSpeech and Common Voice.

### Annotations

#### Annotation process

Codec labels were assigned automatically from the directory structure during preprocessing.

#### Who are the annotators?

No human annotation was used; labels were assigned programmatically.

#### Personal and Sensitive Information

The dataset contains no personal, sensitive, or private data.

## Bias, Risks, and Limitations

The dataset contains only clean voice signals and may not generalize well to real-world recordings that include background noise, reverberation, or music. Only three codecs are represented.

### Recommendations

Apply models trained on this dataset to real-world audio with caution, and consider domain adaptation techniques.

## Glossary

- **Encoding artifact**: A distortion introduced by lossy compression that alters the original waveform.
- **Re-encoding**: The process of compressing audio again, possibly with a different codec.
- **Lossy codec**: An audio format that reduces file size by discarding parts of the signal (e.g., MP3, AAC).

## More Information

Contact: u23ai052@coed.svnit.ac.in

## Dataset Card Authors

Deep Das

## Dataset Card Contact

u23ai052@coed.svnit.ac.in
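## Example: Segmenting Audio into 5-Second Chunks

The chunking step described under Data Collection and Processing can be sketched as follows. This is a minimal illustration, not the dataset's actual pipeline; the function name and the decision to drop trailing partial chunks are assumptions:

```python
import numpy as np

SAMPLE_RATE = 16_000   # dataset audio is resampled to 16 kHz
CHUNK_SECONDS = 5      # dataset uses 5-second segments

def segment_waveform(waveform, sample_rate=SAMPLE_RATE, chunk_seconds=CHUNK_SECONDS):
    """Split a 1-D waveform into non-overlapping fixed-length chunks.

    Any trailing samples shorter than a full chunk are dropped, so
    every returned segment has exactly sample_rate * chunk_seconds samples.
    """
    chunk_len = sample_rate * chunk_seconds
    n_chunks = len(waveform) // chunk_len
    return [waveform[i * chunk_len:(i + 1) * chunk_len] for i in range(n_chunks)]
```

Each resulting chunk (80,000 samples at 16 kHz) would then be passed through `ffmpeg` once per target codec to produce the labeled WAV/MP3/AAC copies.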