|
/opt/conda/envs/py310/bin/python -m mlc_llm gen_config /models/Mistral-7B-Instruct-v0.3 --quantization q0f16 --conv-template mistral_default --output /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC
|
[2024-06-02 06:17:11] INFO auto_config.py:116: Found model configuration: /models/Mistral-7B-Instruct-v0.3/config.json
[2024-06-02 06:17:11] INFO auto_config.py:154: Found model type: mistral. Use `--model-type` to override.
[2024-06-02 06:17:11] INFO mistral_model.py:56: prefill_chunk_size defaults to 2048
[2024-06-02 06:17:11] INFO config.py:107: Overriding max_batch_size from 1 to 80
[2024-06-02 06:17:11] INFO gen_config.py:143: [generation_config.json] Setting bos_token_id: 1
[2024-06-02 06:17:11] INFO gen_config.py:143: [generation_config.json] Setting eos_token_id: 2
[2024-06-02 06:17:11] INFO gen_config.py:155: Found tokenizer config: /models/Mistral-7B-Instruct-v0.3/tokenizer.model. Copying to /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/tokenizer.model
[2024-06-02 06:17:11] INFO gen_config.py:155: Found tokenizer config: /models/Mistral-7B-Instruct-v0.3/tokenizer.json. Copying to /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/tokenizer.json
[2024-06-02 06:17:11] INFO gen_config.py:157: Not found tokenizer config: /models/Mistral-7B-Instruct-v0.3/vocab.json
[2024-06-02 06:17:11] INFO gen_config.py:157: Not found tokenizer config: /models/Mistral-7B-Instruct-v0.3/merges.txt
[2024-06-02 06:17:11] INFO gen_config.py:157: Not found tokenizer config: /models/Mistral-7B-Instruct-v0.3/added_tokens.json
[2024-06-02 06:17:11] INFO gen_config.py:155: Found tokenizer config: /models/Mistral-7B-Instruct-v0.3/tokenizer_config.json. Copying to /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/tokenizer_config.json
[2024-06-02 06:17:11] INFO gen_config.py:216: Detected tokenizer info: {'token_postproc_method': 'byte_fallback', 'prepend_space_in_encode': False, 'strip_space_in_decode': True}
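
A short annotated copy of the detected tokenizer info; the per-field readings in the comments are our assumptions about what each flag means, not something the log states:

    # Tokenizer info exactly as detected above; comments are assumed interpretations.
    tokenizer_info = {
        "token_postproc_method": "byte_fallback",  # assumed: SentencePiece-style byte-fallback post-processing
        "prepend_space_in_encode": False,          # assumed: no space prepended to text before encoding
        "strip_space_in_decode": True,             # assumed: a leading space is stripped when decoding
    }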
|
[2024-06-02 06:17:11] INFO gen_config.py:32: [System default] Setting pad_token_id: 0
[2024-06-02 06:17:11] INFO gen_config.py:32: [System default] Setting temperature: 1.0
[2024-06-02 06:17:11] INFO gen_config.py:32: [System default] Setting presence_penalty: 0.0
[2024-06-02 06:17:11] INFO gen_config.py:32: [System default] Setting frequency_penalty: 0.0
[2024-06-02 06:17:11] INFO gen_config.py:32: [System default] Setting repetition_penalty: 1.0
[2024-06-02 06:17:11] INFO gen_config.py:32: [System default] Setting top_p: 1.0
[2024-06-02 06:17:11] INFO gen_config.py:32: [System default] Setting mean_gen_len: 128
[2024-06-02 06:17:11] INFO gen_config.py:32: [System default] Setting max_gen_len: 512
[2024-06-02 06:17:11] INFO gen_config.py:32: [System default] Setting shift_fill_factor: 0.3
[2024-06-02 06:17:11] INFO gen_config.py:223: Dumping configuration file to: /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/mlc-chat-config.json
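
A minimal sketch of spot-checking the file gen_config just dumped; the key names are assumed to match the settings logged above, since the log itself does not show the schema of mlc-chat-config.json:

    import json

    with open("/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/mlc-chat-config.json") as f:
        cfg = json.load(f)

    # Spot-check a few of the defaults set above (assumed key names).
    for key in ("temperature", "top_p", "repetition_penalty", "pad_token_id"):
        print(key, "=", cfg.get(key))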
|
/opt/conda/envs/py310/bin/python -m mlc_llm convert_weight /models/Mistral-7B-Instruct-v0.3 --quantization q0f16 --output /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC
|
[2024-06-02 06:17:13] INFO auto_config.py:116: Found model configuration: /models/Mistral-7B-Instruct-v0.3/config.json
[2024-06-02 06:17:14] INFO auto_device.py:79: Found device: cuda:0
[2024-06-02 06:17:16] INFO auto_device.py:88: Not found device: rocm:0
[2024-06-02 06:17:17] INFO auto_device.py:88: Not found device: metal:0
[2024-06-02 06:17:19] INFO auto_device.py:79: Found device: vulkan:0
[2024-06-02 06:17:19] INFO auto_device.py:79: Found device: vulkan:1
[2024-06-02 06:17:19] INFO auto_device.py:79: Found device: vulkan:2
[2024-06-02 06:17:19] INFO auto_device.py:79: Found device: vulkan:3
[2024-06-02 06:17:21] INFO auto_device.py:88: Not found device: opencl:0
[2024-06-02 06:17:21] INFO auto_device.py:35: Using device: cuda:0
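_
Device probing above walks cuda, rocm, metal, vulkan, and opencl in turn and settles on the first CUDA hit. A minimal sketch of pinning the conversion to a different detected device via the --device flag (the flag appears in the argument dump below; the choice of vulkan:0 here is only an illustration):

    import subprocess

    # Hypothetical override: force conversion onto vulkan:0 instead of the auto-selected cuda:0.
    subprocess.run(
        [
            "/opt/conda/envs/py310/bin/python", "-m", "mlc_llm", "convert_weight",
            "/models/Mistral-7B-Instruct-v0.3",
            "--quantization", "q0f16",
            "--device", "vulkan:0",
            "--output", "/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC",
        ],
        check=True,
    )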
|
[2024-06-02 06:17:21] INFO auto_weight.py:71: Finding weights in: /models/Mistral-7B-Instruct-v0.3
[2024-06-02 06:17:21] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2024-06-02 06:17:21] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /models/Mistral-7B-Instruct-v0.3/model.safetensors.index.json
[2024-06-02 06:17:21] INFO auto_weight.py:107: Using source weight configuration: /models/Mistral-7B-Instruct-v0.3/model.safetensors.index.json. Use `--source` to override.
[2024-06-02 06:17:21] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-06-02 06:17:21] INFO auto_config.py:154: Found model type: mistral. Use `--model-type` to override.
[2024-06-02 06:17:21] INFO mistral_model.py:56: prefill_chunk_size defaults to 2048

Weight conversion with arguments:
  --config /models/Mistral-7B-Instruct-v0.3/config.json
  --quantization NoQuantize(name='q0f16', kind='no-quant', model_dtype='float16')
  --model-type mistral
  --device cuda:0
  --source /models/Mistral-7B-Instruct-v0.3/model.safetensors.index.json
  --source-format huggingface-safetensor
  --output /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC

Start storing to cache /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC
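
Since q0f16 is NoQuantize with model_dtype='float16', every tensor passes through at 2 bytes per element. A quick size check against the first parameter reported below (plain arithmetic, not tool output):

    # lm_head.weight is logged below with shape (32768, 4096) and dtype float16.
    rows, cols, bytes_per_elem = 32768, 4096, 2
    print(rows * cols * bytes_per_elem / 2**20, "MiB")  # 256.0 MiB for this one tensor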
|
[2024-06-02 06:17:22] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Mistral-7B-Instruct-v0.3/model-00003-of-00003.safetensors
[2024-06-02 06:17:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "lm_head.weight", shape: (32768, 4096), dtype: float16
[2024-06-02 06:17:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.norm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:51] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Mistral-7B-Instruct-v0.3/model-00003-of-00003.safetensors
[2024-06-02 06:17:52] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Mistral-7B-Instruct-v0.3/model-00001-of-00003.safetensors
[2024-06-02 06:17:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.embed_tokens.weight", shape: (32768, 4096), dtype: float16
[2024-06-02 06:17:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:17:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:17:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:17:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:17:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:17:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.6.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.6.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.6.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.6.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.6.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.6.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.7.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.7.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.7.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.7.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.7.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.7.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.8.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.8.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.8.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.8.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.8.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.8.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:20] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:20] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Mistral-7B-Instruct-v0.3/model-00001-of-00003.safetensors
[2024-06-02 06:18:20] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Mistral-7B-Instruct-v0.3/model-00002-of-00003.safetensors
[2024-06-02 06:18:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 06:18:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 06:18:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 06:18:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 06:18:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 06:18:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
100%|██████████| 195/195 [01:31<00:00, 2.12it/s]
|
[2024-06-02 06:18:54] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Mistral-7B-Instruct-v0.3/model-00002-of-00003.safetensors
[2024-06-02 06:18:54] INFO stats.py:77: Time usage: HF loading: 16.591 sec; Pre-quantization mapping: 34.492 sec; Quantization: 0.000 sec
[2024-06-02 06:18:54] INFO stats.py:91: RAM usage: Peak RAM: 9.313 GB. Total bytes loaded from disk: 27.001 GB
[2024-06-02 06:18:54] INFO convert_weight.py:155: Parameter size after quantization: 13.500 GB
[2024-06-02 06:18:54] INFO convert_weight.py:160: Total parameters: 7,248,023,552
[2024-06-02 06:18:54] INFO convert_weight.py:161: Bits per parameter: 16.000
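
The summary figures are mutually consistent; a back-of-the-envelope verification using only numbers from this log (plain arithmetic, not tool output):

    params = 7_248_023_552            # "Total parameters" reported above
    size_bytes = params * 2           # float16 stores each parameter in 2 bytes
    print(size_bytes / 2**30)         # ~13.50 GiB, matching "Parameter size after quantization: 13.500 GB"
    print(size_bytes * 8 / params)    # 16.0, matching "Bits per parameter: 16.000"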
|
[2024-06-02 06:18:54] INFO convert_weight.py:166: Saved to directory: /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC
|
|
|
All finished, 131 total shards committed, record saved to /models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/ndarray-cache.json
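
With the shards committed and the record written to ndarray-cache.json, the output directory is a complete MLC weight package. A minimal usage sketch, assuming mlc_llm's Python MLCEngine entry point and that a model library for the local device is available or can be JIT-compiled:

    from mlc_llm import MLCEngine

    # Point the engine at the directory produced above.
    engine = MLCEngine("/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC")
    for response in engine.chat.completions.create(
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,
    ):
        for choice in response.choices:
            print(choice.delta.content or "", end="", flush=True)
    engine.terminate()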
|
/home/rickzhou/miniconda3/envs/mlc/bin/python -m mlc_llm gen_config /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3 --quantization q0f16 --conv-template mistral_default --output /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC
[2024-06-02 04:09:14] INFO auto_config.py:115: Found model configuration: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/config.json
[2024-06-02 04:09:14] INFO auto_config.py:153: Found model type: mistral. Use `--model-type` to override.
[2024-06-02 04:09:14] INFO mistral_model.py:56: prefill_chunk_size defaults to 2048
[2024-06-02 04:09:14] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-06-02 04:09:14] INFO gen_config.py:142: [generation_config.json] Setting bos_token_id: 1
[2024-06-02 04:09:14] INFO gen_config.py:142: [generation_config.json] Setting eos_token_id: 2
[2024-06-02 04:09:14] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/tokenizer.model. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/tokenizer.model
[2024-06-02 04:09:14] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/tokenizer.json. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/tokenizer.json
[2024-06-02 04:09:14] INFO gen_config.py:156: Not found tokenizer config: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/vocab.json
[2024-06-02 04:09:14] INFO gen_config.py:156: Not found tokenizer config: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/merges.txt
[2024-06-02 04:09:14] INFO gen_config.py:156: Not found tokenizer config: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/added_tokens.json
[2024-06-02 04:09:14] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/tokenizer_config.json. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/tokenizer_config.json
[2024-06-02 04:09:14] INFO gen_config.py:215: Detected tokenizer info: {'token_postproc_method': 'byte_fallback', 'prepend_space_in_encode': False, 'strip_space_in_decode': True}
[2024-06-02 04:09:14] INFO gen_config.py:31: [System default] Setting pad_token_id: 0
[2024-06-02 04:09:14] INFO gen_config.py:31: [System default] Setting temperature: 1.0
[2024-06-02 04:09:14] INFO gen_config.py:31: [System default] Setting presence_penalty: 0.0
[2024-06-02 04:09:14] INFO gen_config.py:31: [System default] Setting frequency_penalty: 0.0
[2024-06-02 04:09:14] INFO gen_config.py:31: [System default] Setting repetition_penalty: 1.0
[2024-06-02 04:09:14] INFO gen_config.py:31: [System default] Setting top_p: 1.0
[2024-06-02 04:09:14] INFO gen_config.py:31: [System default] Setting mean_gen_len: 128
[2024-06-02 04:09:14] INFO gen_config.py:31: [System default] Setting max_gen_len: 512
[2024-06-02 04:09:14] INFO gen_config.py:31: [System default] Setting shift_fill_factor: 0.3
[2024-06-02 04:09:14] INFO gen_config.py:222: Dumping configuration file to: /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/mlc-chat-config.json
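
The generation defaults logged above are written into mlc-chat-config.json. A minimal inspection sketch, assuming the JSON field names mirror the labels in the "[System default] Setting ..." lines (this may vary across mlc_llm versions):

import json

with open("/ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/mlc-chat-config.json") as f:
    cfg = json.load(f)
for key in ("temperature", "presence_penalty", "frequency_penalty",
            "repetition_penalty", "top_p", "pad_token_id"):
    print(key, "=", cfg.get(key))  # .get() in case a key name differs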
|
/home/rickzhou/miniconda3/envs/mlc/bin/python -m mlc_llm convert_weight /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3 --quantization q0f16 --output /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC
[2024-06-02 04:09:15] INFO auto_config.py:115: Found model configuration: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/config.json
[2024-06-02 04:09:16] INFO auto_device.py:79: Found device: cuda:0
[2024-06-02 04:09:16] INFO auto_device.py:79: Found device: cuda:1
[2024-06-02 04:09:17] INFO auto_device.py:88: Not found device: rocm:0
[2024-06-02 04:09:18] INFO auto_device.py:88: Not found device: metal:0
[2024-06-02 04:09:19] INFO auto_device.py:79: Found device: vulkan:0
[2024-06-02 04:09:19] INFO auto_device.py:79: Found device: vulkan:1
[2024-06-02 04:09:19] INFO auto_device.py:79: Found device: vulkan:2
[2024-06-02 04:09:20] INFO auto_device.py:88: Not found device: opencl:0
[2024-06-02 04:09:20] INFO auto_device.py:35: Using device: cuda:0
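
The probe order above is fixed: cuda, then rocm, metal, vulkan, and opencl, and the first backend found is selected (cuda:0 here). A rough sketch of the same availability check through TVM's runtime API, which mlc_llm builds on; `Device.exist` is the standard TVM probe, though auto_device.py may differ in detail:

import tvm

# Same probe order as the log: cuda -> rocm -> metal -> vulkan -> opencl.
for name in ("cuda:0", "rocm:0", "metal:0", "vulkan:0", "opencl:0"):
    dev = tvm.device(name)
    print(name, "found" if dev.exist else "not found")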
|
[2024-06-02 04:09:20] INFO auto_weight.py:70: Finding weights in: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3
[2024-06-02 04:09:20] INFO auto_weight.py:136: Not found Huggingface PyTorch
[2024-06-02 04:09:20] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model.safetensors.index.json
[2024-06-02 04:09:20] INFO auto_weight.py:106: Using source weight configuration: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model.safetensors.index.json. Use `--source` to override.
[2024-06-02 04:09:20] INFO auto_weight.py:110: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-06-02 04:09:20] INFO auto_config.py:153: Found model type: mistral. Use `--model-type` to override.
[2024-06-02 04:09:20] INFO mistral_model.py:56: prefill_chunk_size defaults to 2048
Weight conversion with arguments:
  --config /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/config.json
  --quantization NoQuantize(name='q0f16', kind='no-quant', model_dtype='float16')
  --model-type mistral
  --device cuda:0
  --source /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model.safetensors.index.json
  --source-format huggingface-safetensor
  --output /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC
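
Every flag in the argument block is also accepted on the command line; `--source`, `--source-format`, `--model-type`, and `--device` are exactly the overrides the log mentions. A sketch that re-runs this conversion with all of them spelled out (interpreter and paths are specific to this machine):

import subprocess

subprocess.run(
    [
        "python", "-m", "mlc_llm", "convert_weight",
        "/ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3",
        "--quantization", "q0f16",
        "--model-type", "mistral",
        "--device", "cuda:0",
        "--source", "/ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model.safetensors.index.json",
        "--source-format", "huggingface-safetensor",
        "--output", "/ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC",
    ],
    check=True,
)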
|
Start storing to cache /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC
[2024-06-02 04:09:22] INFO huggingface_loader.py:184: Loading HF parameters from: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model-00003-of-00003.safetensors
[2024-06-02 04:09:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "lm_head.weight", shape: (32768, 4096), dtype: float16
[2024-06-02 04:09:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.24.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.24.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.24.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.24.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.24.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.24.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.25.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.25.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.25.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.25.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:36] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.25.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:36] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.25.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:36] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.26.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:36] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.26.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:37] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.26.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:38] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.26.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:38] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.26.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:39] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.26.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:39] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.27.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:39] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.27.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:40] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.27.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:41] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.27.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:41] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.27.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:42] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.27.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:42] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.28.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:42] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.28.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:43] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.28.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:44] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.28.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:44] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.28.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:44] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.28.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:44] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.29.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:45] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.29.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:46] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.29.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:47] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.29.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:47] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.29.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:47] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.29.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:47] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.30.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:48] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.30.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:49] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.30.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.30.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.30.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.30.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.31.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.31.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:51] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.31.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:52] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.31.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.31.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.31.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.norm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:53] INFO huggingface_loader.py:196: Unloading HF weight file: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model-00003-of-00003.safetensors
[2024-06-02 04:09:53] INFO huggingface_loader.py:184: Loading HF parameters from: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model-00001-of-00003.safetensors
[2024-06-02 04:09:55] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.embed_tokens.weight", shape: (32768, 4096), dtype: float16
[2024-06-02 04:09:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:09:57] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:09:58] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:58] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:09:59] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:09:59] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:09:59] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:00] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:01] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:01] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:02] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:02] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:03] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:04] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:04] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:04] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:05] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:06] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:06] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:07] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:07] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:07] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:08] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:09] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:09] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:09] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:11] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:26] INFO huggingface_loader.py:196: Unloading HF weight file: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model-00001-of-00003.safetensors
[2024-06-02 04:10:27] INFO huggingface_loader.py:184: Loading HF parameters from: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model-00002-of-00003.safetensors
[2024-06-02 04:10:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:36] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:36] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:37] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:37] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:37] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:38] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:39] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:39] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:39] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:39] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:40] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:41] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:42] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:42] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:42] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:42] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:43] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:44] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:45] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:45] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:45] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:45] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:45] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:46] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:47] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:47] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:48] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:48] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:48] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:49] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:51] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:51] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:51] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:52] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:54] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:54] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:55] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.input_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:57] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.mlp.down_proj.weight", shape: (4096, 14336), dtype: float16
[2024-06-02 04:10:58] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.mlp.gate_up_proj.weight", shape: (28672, 4096), dtype: float16
[2024-06-02 04:10:59] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.post_attention_layernorm.weight", shape: (4096,), dtype: float16
[2024-06-02 04:10:59] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:10:59] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
[2024-06-02 04:10:59] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.self_attn.qkv_proj.weight", shape: (6144, 4096), dtype: float16
[2024-06-02 04:11:00] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.self_attn.o_proj.weight", shape: (4096, 4096), dtype: float16
100%|██████████| 195/195 [01:38<00:00, 1.99it/s]
|
[2024-06-02 04:11:00] INFO huggingface_loader.py:196: Unloading HF weight file: /ssd1/rickzhou/models/Mistral-7B-Instruct-v0.3/model-00002-of-00003.safetensors
[2024-06-02 04:11:00] INFO stats.py:76: Time usage: HF loading: 3.723 sec; Pre-quantization mapping: 27.572 sec; Quantization: 0.000 sec
[2024-06-02 04:11:00] INFO stats.py:90: RAM usage: Peak RAM: 9.313 GB. Total bytes loaded from disk: 27.001 GB
[2024-06-02 04:11:00] INFO convert_weight.py:155: Parameter size after quantization: 13.500 GB
[2024-06-02 04:11:00] INFO convert_weight.py:160: Total parameters: 7,248,023,552
[2024-06-02 04:11:00] INFO convert_weight.py:161: Bits per parameter: 16.000
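
q0f16 performs no quantization and stores every tensor as float16, so the size and bits-per-parameter figures follow directly from the parameter count (the "GB" in these stats is the binary GiB):

n_params = 7_248_023_552
n_bytes = n_params * 2            # float16 = 2 bytes per parameter
print(n_bytes / 2**30)            # ~13.500 GB (GiB), as reported
print(n_bytes * 8 / n_params)     # 16.0 bits per parameter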
|
[2024-06-02 04:11:00] INFO convert_weight.py:166: Saved to directory: /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC
|
|
|
All finished, 131 total shards committed, record saved to /ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/ndarray-cache.json |
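
ndarray-cache.json is the index of the committed shards. A minimal sketch for verifying the shard count, assuming the usual MLC layout of a top-level "records" list with one entry per shard file (field names can vary across versions):

import json

with open("/ssd2/models/mlc-delivery/hf/mlc-ai/Mistral-7B-Instruct-v0.3-q0f16-MLC/ndarray-cache.json") as f:
    cache = json.load(f)
shards = cache.get("records", [])
print("shards:", len(shards))                             # expected: 131, per the log
print("bytes:", sum(s.get("nbytes", 0) for s in shards))  # total committed bytes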
|
|