/opt/conda/envs/py310/bin/python -m mlc_llm gen_config /models/Mixtral-8x7B-Instruct-v0.1 --quantization q4f32_1 --conv-template mistral_default --output /models/mlc-delivery/hf/mlc-ai/Mixtral-8x7B-Instruct-v0.1-q4f32_1-MLC
[2024-06-06 22:21:44] INFO auto_config.py:116: Found model configuration: /models/Mixtral-8x7B-Instruct-v0.1/config.json
[2024-06-06 22:21:44] INFO auto_config.py:154: Found model type: mixtral. Use `--model-type` to override.
[2024-06-06 22:21:44] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)
[2024-06-06 22:21:44] INFO llama_model.py:72: prefill_chunk_size defaults to 2048
[2024-06-06 22:21:44] INFO config.py:107: Overriding max_batch_size from 1 to 80
[2024-06-06 22:21:44] INFO gen_config.py:143: [generation_config.json] Setting bos_token_id: 1
[2024-06-06 22:21:44] INFO gen_config.py:143: [generation_config.json] Setting eos_token_id: 2
[2024-06-06 22:21:44] INFO gen_config.py:155: Found tokenizer config: /models/Mixtral-8x7B-Instruct-v0.1/tokenizer.model. Copying to /models/mlc-delivery/hf/mlc-ai/Mixtral-8x7B-Instruct-v0.1-q4f32_1-MLC/tokenizer.model
[2024-06-06 22:21:44] INFO gen_config.py:155: Found tokenizer config: /models/Mixtral-8x7B-Instruct-v0.1/tokenizer.json. Copying to /models/mlc-delivery/hf/mlc-ai/Mixtral-8x7B-Instruct-v0.1-q4f32_1-MLC/tokenizer.json
[2024-06-06 22:21:44] INFO gen_config.py:157: Not found tokenizer config: /models/Mixtral-8x7B-Instruct-v0.1/vocab.json
[2024-06-06 22:21:44] INFO gen_config.py:157: Not found tokenizer config: /models/Mixtral-8x7B-Instruct-v0.1/merges.txt
[2024-06-06 22:21:44] INFO gen_config.py:157: Not found tokenizer config: /models/Mixtral-8x7B-Instruct-v0.1/added_tokens.json
[2024-06-06 22:21:44] INFO gen_config.py:155: Found tokenizer config: /models/Mixtral-8x7B-Instruct-v0.1/tokenizer_config.json. Copying to /models/mlc-delivery/hf/mlc-ai/Mixtral-8x7B-Instruct-v0.1-q4f32_1-MLC/tokenizer_config.json
[2024-06-06 22:21:44] INFO gen_config.py:216: Detected tokenizer info: {'token_postproc_method': 'byte_fallback', 'prepend_space_in_encode': True, 'strip_space_in_decode': True}
[2024-06-06 22:21:44] INFO gen_config.py:32: [System default] Setting pad_token_id: 0
[2024-06-06 22:21:44] INFO gen_config.py:32: [System default] Setting temperature: 1.0
[2024-06-06 22:21:44] INFO gen_config.py:32: [System default] Setting presence_penalty: 0.0
[2024-06-06 22:21:44] INFO gen_config.py:32: [System default] Setting frequency_penalty: 0.0
[2024-06-06 22:21:44] INFO gen_config.py:32: [System default] Setting repetition_penalty: 1.0
[2024-06-06 22:21:44] INFO gen_config.py:32: [System default] Setting top_p: 1.0
[2024-06-06 22:21:44] INFO gen_config.py:223: Dumping configuration file to: /models/mlc-delivery/hf/mlc-ai/Mixtral-8x7B-Instruct-v0.1-q4f32_1-MLC/mlc-chat-config.json
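Annotation (added; not part of the original log): one way to sanity-check the gen_config step is to read back the mlc-chat-config.json it reported dumping and compare it against the values logged above. This is only a sketch; the field names are assumptions inferred from the log lines, not an authoritative schema reference.

    import json

    # Path taken from the "Dumping configuration file to:" line above.
    cfg_path = "/models/mlc-delivery/hf/mlc-ai/Mixtral-8x7B-Instruct-v0.1-q4f32_1-MLC/mlc-chat-config.json"
    with open(cfg_path) as f:
        cfg = json.load(f)

    # Expected values come verbatim from the gen_config output; field names are assumed.
    assert cfg["context_window_size"] == 32768  # fell back to max_position_embeddings
    assert cfg["prefill_chunk_size"] == 2048    # default
    assert cfg["bos_token_id"] == 1 and cfg["eos_token_id"] == 2
    assert cfg["pad_token_id"] == 0             # system default
    assert cfg["temperature"] == 1.0 and cfg["top_p"] == 1.0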
/opt/conda/envs/py310/bin/python -m mlc_llm convert_weight /models/Mixtral-8x7B-Instruct-v0.1 --quantization q4f32_1 --output /models/mlc-delivery/hf/mlc-ai/Mixtral-8x7B-Instruct-v0.1-q4f32_1-MLC
[2024-06-06 22:21:46] INFO auto_config.py:116: Found model configuration: /models/Mixtral-8x7B-Instruct-v0.1/config.json
[2024-06-06 22:21:47] INFO auto_device.py:79: Found device: cuda:0
[2024-06-06 22:21:49] INFO auto_device.py:88: Not found device: rocm:0
[2024-06-06 22:21:50] INFO auto_device.py:88: Not found device: metal:0
[2024-06-06 22:21:52] INFO auto_device.py:79: Found device: vulkan:0
[2024-06-06 22:21:52] INFO auto_device.py:79: Found device: vulkan:1
[2024-06-06 22:21:52] INFO auto_device.py:79: Found device: vulkan:2
[2024-06-06 22:21:52] INFO auto_device.py:79: Found device: vulkan:3
[2024-06-06 22:21:53] INFO auto_device.py:88: Not found device: opencl:0
[2024-06-06 22:21:53] INFO auto_device.py:35: Using device: cuda:0
[2024-06-06 22:21:53] INFO auto_weight.py:71: Finding weights in: /models/Mixtral-8x7B-Instruct-v0.1
[2024-06-06 22:21:53] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2024-06-06 22:21:53] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /models/Mixtral-8x7B-Instruct-v0.1/model.safetensors.index.json
[2024-06-06 22:21:53] INFO auto_weight.py:107: Using source weight configuration: /models/Mixtral-8x7B-Instruct-v0.1/model.safetensors.index.json. Use `--source` to override.
[2024-06-06 22:21:53] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-06-06 22:21:53] INFO auto_config.py:154: Found model type: mixtral. Use `--model-type` to override.
[2024-06-06 22:21:53] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)
[2024-06-06 22:21:53] INFO llama_model.py:72: prefill_chunk_size defaults to 2048
Weight conversion with arguments:
  --config          /models/Mixtral-8x7B-Instruct-v0.1/config.json
  --quantization    GroupQuantize(name='q4f32_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      mixtral
  --device          cuda:0
  --source          /models/Mixtral-8x7B-Instruct-v0.1/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /models/mlc-delivery/hf/mlc-ai/Mixtral-8x7B-Instruct-v0.1-q4f32_1-MLC
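Annotation (added; not part of the original log): the quantized shapes reported below follow directly from the GroupQuantize parameters above. int4 values are packed eight to a uint32 word (num_elem_per_storage = 32 storage bits / 4 quantize bits = 8), and each run of group_size=32 elements shares one float32 scale. A minimal sketch of that arithmetic:

    GROUP_SIZE = 32              # elements sharing one float32 scale
    ELEMS_PER_STORAGE = 32 // 4  # uint32 storage bits / int4 quantize bits = 8

    def quantized_shapes(shape, axis):
        """Shapes of (q_weight, q_scale) when group-quantizing `shape` along `axis`."""
        q_weight, q_scale = list(shape), list(shape)
        q_weight[axis] = shape[axis] // ELEMS_PER_STORAGE
        q_scale[axis] = shape[axis] // GROUP_SIZE
        return tuple(q_weight), tuple(q_scale)

    # These match the [Quantized] parameter shapes logged below.
    assert quantized_shapes((32000, 4096), axis=1) == ((32000, 512), (32000, 128))
    assert quantized_shapes((8, 28672, 4096), axis=2) == ((8, 28672, 512), (8, 28672, 128))
    assert quantized_shapes((8, 4096, 14336), axis=2) == ((8, 4096, 1792), (8, 4096, 448))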
File "/opt/conda/envs/py310/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/envs/py310/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/__main__.py", line 64, in <module> main() File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/__main__.py", line 37, in main cli.main(sys.argv[2:]) File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/cli/convert_weight.py", line 88, in main convert_weight( File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/interface/convert_weight.py", line 181, in convert_weight _convert_args(args) File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/interface/convert_weight.py", line 145, in _convert_args tvmjs.dump_ndarray_cache( File "/opt/conda/envs/py310/lib/python3.10/site-packages/tvm/contrib/tvmjs.py", line 272, in dump_ndarray_cache for k, origin_v in param_generator: File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/interface/convert_weight.py", line 129, in _param_generator for name, param in loader.load(device=args.device, preshard_funcs=preshard_funcs): File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/loader/huggingface_loader.py", line 118, in load param = self._load_mlc_param(mlc_name, device=device) File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/loader/huggingface_loader.py", line 157, in _load_mlc_param return as_ndarray(param, device=device) File "/opt/conda/envs/py310/lib/python3.10/site-packages/tvm/runtime/ndarray.py", line 675, in array return empty(arr.shape, arr.dtype, device, mem_scope).copyfrom(arr) File "/opt/conda/envs/py310/lib/python3.10/site-packages/tvm/runtime/ndarray.py", line 431, in empty arr = _ffi_api.TVMArrayAllocWithScope(shape, dtype, device, mem_scope) File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__ File "tvm/_ffi/_cython/./packed_func.pxi", line 277, in tvm._ffi._cy3.core.FuncCall File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL File "/opt/conda/envs/py310/lib/python3.10/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error raise py_err tvm.error.InternalError: Traceback (most recent call last): 5: _ZN3tvm7runtime13PackedFun 4: tvm::runtime::TypedPackedFunc<tvm::runtime::NDArray (tvm::runtime::ShapeTuple, DLDataType, DLDevice, tvm::runtime::Optional<tvm::runtime::String>)>::AssignTypedLambda<tvm::runtime::NDArray (*)(tvm::runtime::ShapeTuple, DLDataType, DLDevice, tvm::runtime::Optional<tvm::runtime::String>)>(tvm::runtime::NDArray (*)(tvm::runtime::ShapeTuple, DLDataType, DLDevice, tvm::runtime::Optional<tvm::runtime::String>), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const 3: tvm::runtime::NDArray::Empty(tvm::runtime::ShapeTuple, DLDataType, DLDevice, tvm::runtime::Optional<tvm::runtime::String>) 2: tvm::runtime::DeviceAPI::AllocDataSpace(DLDevice, int, long const*, DLDataType, tvm::runtime::Optional<tvm::runtime::String>) 1: tvm::runtime::CUDADeviceAPI::AllocDataSpace(DLDevice, unsigned long, unsigned long, DLDataType) 0: _ZN3tvm7runtime6deta File "/workspace/tvm/src/runtime/cuda/cuda_device_api.cc", line 145 InternalError: Check failed: (e == cudaSuccess || e == cudaErrorCudartUnloading) is false: 
CUDA: out of memory |
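Annotation (added; not part of the original log): the traceback ends in _load_mlc_param -> as_ndarray(param, device=device), i.e. the loader allocates each source tensor on cuda:0 at full float32 precision before its quantize kernel runs. A back-of-the-envelope sketch, using the shapes from the "Compiling quantize function" keys above, shows how large those staging buffers are; the run got through only 5 of 227 parameters before an allocation failed.

    # Size in GiB of a float32 tensor staged on the GPU before quantization.
    def gib(shape, bytes_per_elem=4):  # float32 = 4 bytes per element
        n = 1
        for dim in shape:
            n *= dim
        return n * bytes_per_elem / 2**30

    print(f"lm_head   (32000, 4096):    {gib((32000, 4096)):.2f} GiB")    # ~0.49 GiB
    print(f"moe.e1_e3 (8, 28672, 4096): {gib((8, 28672, 4096)):.2f} GiB") # ~3.50 GiB per layer
    print(f"moe.e2    (8, 4096, 14336): {gib((8, 4096, 14336)):.2f} GiB") # ~1.75 GiB per layer

With multi-GiB staging buffers held alongside the compiled kernels and their quantized outputs, the final "CUDA: out of memory" is consistent with the device simply running out of headroom for the next such allocation.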