Upload folder using huggingface_hub
Files changed:
- .gitattributes (+59 -59)
- LICENCE (+17 -0)
- Readme.md (+145 -0)
.gitattributes
CHANGED
The diff rewrites all 59 lines, but the tracked patterns are identical before and after. The resulting file:

```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
```
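Each `.gitattributes` entry pairs a glob pattern with the attributes that route matching files through Git LFS instead of storing them directly in the repository. To track an additional file type, the same pattern shape applies; for example (a hypothetical addition, assuming `.avi` video files should also go through LFS):

```
*.avi filter=lfs diff=lfs merge=lfs -text
```

Running `git lfs track "*.avi"` appends an equivalent line automatically.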
LICENCE
ADDED
# License

Copyright (c) 2025 @ProgramerSalar

Permission is hereby granted, free of charge, to any person obtaining a copy of this dataset and associated documentation files (the "Dataset"), to deal in the Dataset without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Dataset, and to permit persons to whom the Dataset is furnished to do so, subject to the following conditions:

1. **Attribution**: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

2. **Non-Commercial Use**: The Dataset is provided for non-commercial purposes only. Commercial use is strictly prohibited without prior written permission from the copyright holder.

3. **No Warranty**: The Dataset is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the Dataset or the use or other dealings in the Dataset.

4. **Redistribution**: If you redistribute the Dataset, you must include this license and retain all copyright notices.

For any questions or permissions regarding this license, please contact [email protected].

---
Readme.md
ADDED
# Dataset Documentation

## Overview

This dataset supports machine learning and data analysis tasks. It consists of two compressed archives, `train.tar` and `test.tar`, containing the training and testing data, respectively. The dataset is structured for easy integration into machine learning pipelines and other data-driven workflows.

---

## Dataset Contents

### 1. `train.tar`

The `train.tar` archive contains the training data used to build and train machine learning models, i.e., the examples a model learns from to recognize patterns, make predictions, or classify data points.

- **Purpose**: Training machine learning models.
- **Contents**: Multiple files (or directories) representing the training dataset. Each file may correspond to a specific data sample, feature set, or label.

### 2. `test.tar`

The `test.tar` archive contains the testing data used to evaluate trained models. It is kept separate from the training set to ensure an unbiased evaluation.

- **Purpose**: Testing and validating machine learning models.
- **Contents**: Files (or directories) representing the testing dataset, mirroring the structure of the training archive.

---

## File Structure

After extracting the `.tar` files, the dataset has the following structure:

```
dataset/
├── train/
│   ├── file1.ext
│   ├── file2.ext
│   └── ...
└── test/
    ├── file1.ext
    ├── file2.ext
    └── ...
```

- **`train/`**: Contains training data files.
- **`test/`**: Contains testing data files.

---
## How to Use the Dataset

### Step 1: Extract the Archives

To access the dataset, extract the contents of the `.tar` files:

```bash
tar -xvf train.tar
tar -xvf test.tar
```

This creates two directories: `train/` and `test/`.
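If you prefer to stay in Python, the standard library's `tarfile` module can perform the same extraction. A minimal sketch (the archive names follow this README; the destination directory and helper name are illustrative):

```python
import os
import tarfile

def extract_archive(archive_path: str, dest_dir: str = ".") -> list[str]:
    """Extract a .tar archive into dest_dir and return its member names."""
    with tarfile.open(archive_path) as tar:
        members = tar.getnames()
        tar.extractall(path=dest_dir)  # extracts every member into dest_dir
    return members

if __name__ == "__main__":
    for archive in ("train.tar", "test.tar"):
        if os.path.exists(archive):
            names = extract_archive(archive)
            print(archive, "->", len(names), "members")
```

Note that `extractall` trusts the archive's member paths; on newer Python versions (3.12+), passing `filter="data"` to `extractall` rejects members that would escape the destination directory.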
### Step 2: Load the Data

Once extracted, load the data into your preferred programming environment. For example, in Python:

```python
import os

# Define paths
train_path = "train/"
test_path = "test/"

# List files in the training directory
train_files = os.listdir(train_path)
print("Training Files:", train_files)

# List files in the testing directory
test_files = os.listdir(test_path)
print("Testing Files:", test_files)
```
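Because the file format inside the archives is not fixed, it can help to inventory the extensions present before writing a loader. A minimal sketch (the `train`/`test` directory names follow the structure above; the helper name is illustrative):

```python
from collections import Counter
from pathlib import Path

def extension_counts(directory: str) -> Counter:
    """Count files per extension, searching the directory recursively."""
    return Counter(
        p.suffix.lower() for p in Path(directory).rglob("*") if p.is_file()
    )

if __name__ == "__main__":
    for split in ("train", "test"):
        if Path(split).is_dir():
            print(split, dict(extension_counts(split)))
```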
### Step 3: Integrate with Your Workflow

You can now use the data to train and evaluate machine learning models. Be sure to preprocess the data as needed (e.g., cleaning, normalization, or feature extraction).

---
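As one concrete illustration of the normalization mentioned above, min-max scaling maps numeric features linearly onto [0, 1]. This is a generic sketch, not specific to this dataset's (unspecified) format:

```python
def min_max_normalize(values: list[float]) -> list[float]:
    """Rescale values linearly so min maps to 0.0 and max maps to 1.0."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant feature: no spread to rescale, map all to 0.0
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_normalize([2.0, 4.0, 6.0]))  # → [0.0, 0.5, 1.0]
```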
## Dataset Characteristics

- **Size**: Depends on the contents of the `train.tar` and `test.tar` archives.
- **Format**: Files within the archives may be in formats such as `.csv`, `.txt`, or `.json`, depending on the dataset's design.
- **Labels**: If the dataset is labeled, the labels are typically included in the training and testing files or in a separate metadata file.

---

## Best Practices

1. **Data Splitting**: Keep the training and testing data strictly separate to preserve the integrity of model evaluation.
2. **Preprocessing**: Apply appropriate preprocessing steps, such as cleaning, normalization, or augmentation.
3. **Version Control**: If you modify the dataset, track your changes to ensure reproducibility.

---
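A quick way to enforce the first practice above is to check that no filename appears in both splits. A sketch, under the assumption that matching filenames indicate the same underlying sample:

```python
from pathlib import Path

def split_overlap(train_dir: str, test_dir: str) -> set[str]:
    """Return filenames that appear in both the train and test directories."""
    train_names = {p.name for p in Path(train_dir).rglob("*") if p.is_file()}
    test_names = {p.name for p in Path(test_dir).rglob("*") if p.is_file()}
    return train_names & test_names

if __name__ == "__main__":
    leaked = split_overlap("train", "test")
    if leaked:
        print("WARNING: files present in both splits:", sorted(leaked))
```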
## Licensing and Usage

Please review the licensing terms associated with this dataset before use, and ensure compliance with any restrictions or requirements.

---

## Citation

If you use this dataset in your research or project, please cite it as follows:

```
[Dataset Name]. Provided by [Dataset Provider]. Retrieved from [Source URL].
```

---
## Frequently Asked Questions (FAQ)

### 1. How do I extract the `.tar` files?

Use the `tar` command in a terminal, or any file extraction tool that supports `.tar` archives.

### 2. What format are the data files in?

The format depends on the specific dataset. Common formats include `.csv`, `.txt`, and `.json`.

### 3. Can I use this dataset for commercial purposes?

Refer to the licensing section; under the included license, commercial use requires prior written permission from the copyright holder.

---

## Support

If you encounter any issues or have questions about the dataset, please contact the dataset provider or refer to the official documentation.

---

## Acknowledgments

We would like to thank the contributors and maintainers of this dataset for their efforts in creating and sharing this resource.

---

## Change Log

- **Version 1.0**: Initial release of the dataset.

---

Thank you for using this dataset! We hope it proves valuable for your projects and research.