---
base_model:
- Qwen/Qwen2.5-7B-Instruct
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
datasets:
- MachineLearningLM/machinelearninglm-scm-synthetic-tabularml
tags:
- Tabular Classification
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/MachineLearningLM-7B-v1-GGUF
This is a quantized version of [MachineLearningLM/MachineLearningLM-7B-v1](https://huggingface.co/MachineLearningLM/MachineLearningLM-7B-v1), created using llama.cpp.
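
The GGUF files can be run with any llama.cpp-compatible tool. Below is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quant filename pattern is an assumption, so check this repo's Files tab for the exact quant names before running:

```python
# Minimal sketch: load a GGUF quant from this repo with llama-cpp-python.
# The filename pattern below is an assumption; check the repo's Files tab
# for the actual quant files (e.g. Q4_K_M, Q5_K_M, Q8_0).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/MachineLearningLM-7B-v1-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; downloads the first matching file
    n_ctx=8192,               # many-shot prompts benefit from a large context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Is 17 a prime number? Answer yes or no."}],
    max_tokens=16,
)
print(out["choices"][0]["message"]["content"])
```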

# Original Model Card

# MachineLearningLM

This repository contains the model presented in the paper [MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining](https://huggingface.co/papers/2509.06806).

## Model Summary

Can LLMs learn from 1,000 in-context examples?

Introducing **MachineLearningLM** 🧪📊, a model continually pretrained on millions of synthetic tabular ML tasks, enabling robust many-shot in-context learning.

📈 **Scales from 8 to 1,024 in-context examples**

📈 **~15% improvement** on unseen tabular tasks compared to o3-mini / GPT-5-mini / Qwen-2.5-7B-Instruct

🌲 **Random-Forest-level numerical modeling robustness**

🧠 **MMLU score: 75.4%**

📄 Read the paper: https://huggingface.co/papers/2509.06806

GitHub: https://github.com/HaoAreYuDong/MachineLearningLM
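
To make the many-shot setup concrete, here is an illustrative sketch of serializing labeled tabular rows into an in-context prompt and querying the full-precision model with Hugging Face Transformers. The column names and prompt wording are invented for illustration; the actual prompt template, feature encoding, and answer parsing used in the paper are implemented in the GitHub repository's evaluation scripts.

```python
# Illustrative sketch only: build a many-shot tabular prompt and query the
# full-precision model with transformers. The real prompt format and parsing
# logic live in the MachineLearningLM GitHub repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MachineLearningLM/MachineLearningLM-7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hypothetical labeled examples (in practice: hundreds up to ~1,024 rows).
train_rows = [
    ("age=39, income=77516, hours_per_week=40", "0"),
    ("age=52, income=209642, hours_per_week=45", "1"),
    ("age=28, income=338409, hours_per_week=40", "0"),
]
test_row = "age=44, income=160323, hours_per_week=50"

shots = "\n".join(f"Features: {x}\nLabel: {y}" for x, y in train_rows)
prompt = (
    "Predict the label of the last row from the labeled examples.\n\n"
    f"{shots}\nFeatures: {test_row}\nLabel:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=4, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```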
46
+
47
+ ## Evaluation and Validation
48
+
49
+ We have developed an automated evaluation framework — simply configure the parameters to easily perform validation and evaluation.
50
+ **The code is now open-sourced at our [GitHub repository](https://github.com/HaoAreYuDong/MachineLearningLM).**
51
+
52
+ **Quick Start**
53
+
54
+ ```bash
55
+ pip install -r requirements.txt
56
+ python ./src/evaluation/model_pred/dl_model_pred.py \
57
+ --input_dir ./demo_input.jsonl \
58
+ --output_dir ./demo_output.jsonl \
59
+ --model_name MachineLearningLM/MachineLearningLM-7B-v1
60
+ ```
**Pipeline**

```bash
# Modify the evaluate_parameters.sh file first
source evaluate_parameters.sh

# Option 1: End-to-end pipeline
./scripts/evaluate_pipeline.sh

# Option 2: Parallel processing
./scripts/multi_process/data_prep.sh
./scripts/multi_process/prompt_gen.sh   # For deep learning only
./scripts/multi_process/model_pred.sh
./scripts/multi_process/evaluation.sh
./scripts/multi_process/report.sh

# Option 3: Sequential processing
./scripts/single_process/data_prep.sh
./scripts/single_process/prompt_gen.sh  # For deep learning only
./scripts/single_process/model_pred.sh
./scripts/single_process/evaluation.sh
./scripts/single_process/report.sh
```

For more usage details, please visit our GitHub repository.

**Quants of Checkpoints**

https://huggingface.co/mradermacher/MachineLearningLM-7B-v1-GGUF

## TabICL Evaluation

**This part of the code must run in an environment with the `tabicl` and `openpyxl` libraries installed.**

The evaluation code for TabICL lives separately in `./src/evaluation/tabicl_evaluate.py`. Use `./scripts/tabicl_evaluate.sh` to obtain the TabICL evaluation results.

Use `--datasets` to specify the datasets to evaluate and `--sample_sizes` to set the number of shots. To evaluate multiple datasets, separate them with spaces; to evaluate all CSV files in the input folder, pass **all**.

## Prior Data

MachineLearningLM uses code from TabICL to generate prior data.

Use `./scripts/generate_data.sh` to generate the prior data. It produces the corresponding `.pt` and `.csv` files and normalizes the feature values in the CSV files to the range 0–999, as in the paper.
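
For intuition, the 0–999 normalization amounts to per-column min-max scaling mapped onto integers. The sketch below is a minimal illustration of that idea on a toy DataFrame, not the repository's actual implementation (which lives behind `generate_data.sh`):

```python
# Minimal sketch of per-column min-max scaling to integers in [0, 999].
# The actual normalization used by generate_data.sh is defined in the repo;
# this only illustrates the idea on a toy DataFrame.
import pandas as pd

def normalize_0_999(df: pd.DataFrame, label_col: str) -> pd.DataFrame:
    out = df.copy()
    for col in out.columns:
        if col == label_col:
            continue  # leave the target column untouched
        lo, hi = out[col].min(), out[col].max()
        if hi == lo:
            out[col] = 0  # constant column: map everything to 0
        else:
            out[col] = ((out[col] - lo) / (hi - lo) * 999).round().astype(int)
    return out

toy = pd.DataFrame({"x1": [0.2, 1.5, 3.0], "x2": [-4.0, 0.0, 8.0], "y": [0, 1, 1]})
print(normalize_0_999(toy, label_col="y"))
```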
106
+
107
+ ### Parameter Introduction(refer to the comments in the file `tabicl\src\tabicl\prior\dataset.py`)
108
+
109
+ **Data Scale & Structure**
110
+
111
+ | Parameter | Type | Description |
112
+ | :------------- | :--- | :------------------------------------------------------ |
113
+ | `min_features` | int | Minimum number of features per dataset |
114
+ | `max_features` | int | Maximum number of features per dataset |
115
+ | `max_classes` | int | Maximum number of target classes |
116
+ | `min_seq_len` | int | Minimum samples per dataset. Uses `max_seq_len` if None |
117
+ | `max_seq_len` | int | Maximum samples per dataset (Not Include) |
118
+
119
+ **Batch Configuration**
120
+
121
+ | Parameter | Type | Description |
122
+ | :--------------------- | :--- | :----------------------------------------------------------- |
123
+ | `batch_size` | int | Total number of datasets to generate per batch |
124
+ | `batch_size_per_gp` | int | Number of datasets per group (shared characteristics) |
125
+ | `batch_size_per_subgp` | int | Number of datasets per subgroup (similar causal structures). Defaults to `batch_size_per_gp` if None |
126
+
127
+ **Sequence Length Control**
128
+
129
+ | Parameter | Type | Description |
130
+ | :--------------- | :--- | :----------------------------------------------------------- |
131
+ | `log_seq_len` | bool | Sample sequence length from log-uniform distribution if True |
132
+ | `seq_len_per_gp` | bool | Sample sequence length per group (enables variable-sized datasets) |
133
+ | `replay_small` | bool | Occasionally sample smaller sequences for model robustness |
134
+
135
+ **Train-Test Split**
136
+
137
+ | Parameter | Type | Description |
138
+ | :--------------- | :-------- | :----------------------------------------------------------- |
139
+ | `min_train_size` | int/float | Start position/ratio for train split (int: absolute, float: fractional) |
140
+ | `max_train_size` | int/float | End position/ratio for train split (int: absolute, float: fractional) |
141
+
142
+ **Generation Method**
143
+
144
+ | Parameter | Type | Description |
145
+ | :----------- | :--- | :----------------------------------------------------------- |
146
+ | `prior_type` | str | Prior type: 'mlp_scm', 'tree_scm', or 'mix_scm' (random selection) |
147
+ | `fixed_hp` | dict | Fixed structural configuration parameters |
148
+ | `sampled_hp` | dict | Parameters sampled during generation |
149
+
150
+ **Computation Settings**
151
+
152
+ | Parameter | Type | Description |
153
+ | :------------------------- | :--- | :------------------------------------------------ |
154
+ | `n_jobs` | int | Number of parallel jobs (-1 = use all processors) |
155
+ | `num_threads_per_generate` | int | Number of threads per generation job |
156
+ | `device` | str | Computation device ('cpu' or 'cuda') |
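
To show how these knobs fit together, here is a hypothetical configuration using the parameter names above. The values are examples only, not the repository defaults; how such a configuration is actually passed to the TabICL prior generator is defined by `generate_data.sh` and the repo's dataset code.

```python
# Hypothetical example values for the prior-data generation parameters listed
# above. These are NOT the repo defaults; generate_data.sh and tabicl's prior
# dataset code define how such a configuration is actually consumed.
prior_config = {
    # Data scale & structure
    "min_features": 5,
    "max_features": 20,
    "max_classes": 10,
    "min_seq_len": None,           # fall back to max_seq_len
    "max_seq_len": 2048,           # exclusive upper bound on samples per dataset
    # Batch configuration
    "batch_size": 256,
    "batch_size_per_gp": 16,
    "batch_size_per_subgp": None,  # defaults to batch_size_per_gp
    # Sequence length control
    "log_seq_len": True,
    "seq_len_per_gp": True,
    "replay_small": True,
    # Train-test split
    "min_train_size": 0.1,
    "max_train_size": 0.9,
    # Generation method
    "prior_type": "mix_scm",       # randomly mixes 'mlp_scm' and 'tree_scm'
    # Computation settings
    "n_jobs": -1,
    "num_threads_per_generate": 1,
    "device": "cuda",
}
```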

## Train

MachineLearningLM uses the LLaMA-Factory framework for training.

### Training Environment Configuration

```bash
cd ./third_party/LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
pip install wandb
```

Use `./scripts/train.sh` for training.

## Project Structure

```
MachineLearningLM/
├── src/
│   ├── evaluation/
│   │   ├── data_prep/             # Data preprocessing and chunking utilities
│   │   ├── prompt_gen/            # Prompt generation for deep learning models
│   │   ├── model_pred/            # Model inference (ML and DL prediction engines)
│   │   ├── result_proc/           # 5-layer evaluation architecture and metrics processing
│   │   ├── zero_summary/          # Result summarization and report generation
│   │   └── tabicl_evaluate.py
│   └── prior_data/
│       └── pt_to_csv.py
├── scripts/
│   ├── single_process/            # Sequential execution shell scripts
│   ├── multi_process/             # Parallel execution shell scripts (with _mp suffix)
│   ├── evaluate_parameters.sh     # Global parameter configuration
│   ├── evaluate_pipeline.sh       # Automated end-to-end pipeline
│   ├── generate_data.sh
│   ├── tabicl_evaluate.sh
│   └── train.sh
├── datahub_inputs/
│   ├── data_demo/                 # Demo datasets for testing
│   └── data_raw/                  # Raw input datasets
├── third_party/
│   ├── tabicl/
│   └── LLaMA-Factory/
├── requirements.txt               # Python dependencies for the evaluation framework
├── README.md
├── README_zh.md
├── THIRD_PARTY_NOTICES.md
└── LICENSE
```