mistral-7b-instruct-v0.3-mimic4-adapt-l2r / training_l2r_log_2025-05-29_19-25-52.log
Trained L2R token ranking model on MIMIC-IV
2025-05-29 19:25:52,028 - INFO - 📝 Logging initialized. Log file created at: ../tmp/logs/training_l2r_log_2025-05-29_19-25-52.log - [learning2rank.py:264:setup_logging]
2025-05-29 19:25:52,028 - INFO - ================================================================================ - [learning2rank.py:108:log_section]
2025-05-29 19:25:52,028 - INFO - = 📌 INITIALIZING TRAINING ENVIRONMENT = - [learning2rank.py:109:log_section]
2025-05-29 19:25:52,028 - INFO - ================================================================================ - [learning2rank.py:112:log_section]
2025-05-29 19:25:52,028 - INFO - 🚀 Setting up data paths and environment variables... - [learning2rank.py:3841:main]
2025-05-29 19:25:52,028 - INFO - 🛠️ Command-line Arguments: - [learning2rank.py:280:print_args]
2025-05-29 19:25:52,029 - INFO -
🔹 output_dir: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r
🔹 source_url: XURLs.MIMIC4_DEMO
🔹 data: mimic4_icd10_full
🔹 data_l2r_fname_prefix: mimic4_icd10
🔹 logfile: training_l2r_log
🔹 base_dir: ../tmp/MIMIC4_DEMO
🔹 l2r_boot_dir: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b
🔹 hub_model_id: deb101/mistral-7b-instruct-v0.3-mimic4-adapt
🔹 model_name: mistralai/Mistral-7B-Instruct-v0.3
🔹 max_length: 512
🔹 do_fresh_training: True
🔹 load_from_checkpoint: False
🔹 task: l2r
🔹 num_train_epochs: 4
🔹 metric_for_best_model: ndcg@25
🔹 learning_rate: 0.0001
🔹 warmup_steps: 0
🔹 generate_report: False
🔹 logfile_path: ../tmp/logs/training_l2r_log_2025-05-29_19-25-52.log
🔹 source: /home/ubuntu/.xcube/data/mimic4_demo - [learning2rank.py:281:print_args]
2025-05-29 19:25:52,029 - INFO - ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖ - [learning2rank.py:282:print_args]
2025-05-29 19:25:52,029 - INFO - 📁 Using L2R boot directory: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b - [learning2rank.py:3855:main]
2025-05-29 19:25:52,029 - INFO - 📂 Using output directory: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r - [learning2rank.py:3857:main]
2025-05-29 19:25:52,029 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:108:log_section]
2025-05-29 19:25:52,029 - INFO - + LOADING DATASETS + - [learning2rank.py:109:log_section]
2025-05-29 19:25:52,029 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:112:log_section]
2025-05-29 19:25:52,029 - INFO - 📊 Loading main dataset and L2R dataset... - [learning2rank.py:3860:main]
2025-05-29 19:25:52,029 - INFO - 📂 Loading main data from: /home/ubuntu/.xcube/data/mimic4_demo/mimic4_icd10_full.csv - [learning2rank.py:310:get_data]
2025-05-29 19:26:00,147 - INFO - Successfully loaded main data: 122279 rows - [learning2rank.py:323:get_data]
2025-05-29 19:26:00,147 - INFO - 📂 Loading L2R data from: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b/mimic4_icd10_tok_rank_per_lbl.ft - [learning2rank.py:329:get_data]
2025-05-29 19:26:00,147 - INFO - 📂 Loading L2R tokens from: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b/mimic4_icd10_tok.ft - [learning2rank.py:332:get_data]
2025-05-29 19:26:00,147 - INFO - 📂 Loading L2R labels from: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b/mimic4_icd10_lbl.ft - [learning2rank.py:335:get_data]
2025-05-29 19:26:01,853 - INFO - Successfully loaded L2R data: 260243456 rows - [learning2rank.py:342:get_data]
2025-05-29 19:26:01,853 - INFO - Successfully loaded L2R tokens: 32768 tokens - [learning2rank.py:343:get_data]
2025-05-29 19:26:01,853 - INFO - Successfully loaded L2R labels: 7942 rows - [learning2rank.py:344:get_data]
2025-05-29 19:26:01,853 - INFO - 🔄 Total data loaded: 122279 main rows, 260243456 L2R rows - [learning2rank.py:351:get_data]
2025-05-29 19:26:01,853 - INFO - Successfully loaded both datasets: - [learning2rank.py:375:load_datasets]
2025-05-29 19:26:01,853 - INFO - - 📄 Main dataset: 122279 records - [learning2rank.py:376:load_datasets]
2025-05-29 19:26:01,853 - INFO - - 📄 L2R dataset: 260243456 records - [learning2rank.py:377:load_datasets]
2025-05-29 19:26:01,853 - INFO - - 🔤 Tokens: 32768 items - [learning2rank.py:378:load_datasets]
2025-05-29 19:26:01,853 - INFO - - 🏷️ Labels: 7942 items - [learning2rank.py:379:load_datasets]
2025-05-29 19:26:01,860 - INFO - Data loading completed successfully - [learning2rank.py:387:load_datasets]
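The four loads above likely reduce to plain pandas reads; a minimal sketch, assuming the .ft files are Feather frames (paths and row counts are taken from the log; the variable names are illustrative):

```python
from pathlib import Path
import pandas as pd

source   = Path("/home/ubuntu/.xcube/data/mimic4_demo")
boot_dir = Path("../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b")

main_df = pd.read_csv(source / "mimic4_icd10_full.csv")                   # 122,279 rows
l2r_df  = pd.read_feather(boot_dir / "mimic4_icd10_tok_rank_per_lbl.ft")  # 260,243,456 rows
tokens  = pd.read_feather(boot_dir / "mimic4_icd10_tok.ft")               # 32,768 tokens
labels  = pd.read_feather(boot_dir / "mimic4_icd10_lbl.ft")               # 7,942 labels
```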
2025-05-29 19:26:03,128 - INFO - Starting quantization of ranks for DataFrame with 260243456 rows, containing 32768 unique tokens & 7942 unique labels - [learning2rank.py:506:quantize_l2r_data]
2025-05-29 19:26:03,129 - INFO - 🔄 Quantizing those 32768 unique token ranks into 101 quantization levels for each label - [learning2rank.py:531:quantize_l2r_data]
2025-05-29 19:26:49,291 - INFO - Completed quantization: Produced tensor of shape torch.Size([7942, 32768, 4]) with 101 quantization levels per label - [learning2rank.py:585:quantize_l2r_data]
2025-05-29 19:26:49,320 - WARNING - Label 1295: Only 1 token with top relevance score (need 50) - [learning2rank.py:701:test_scored_tokens]
2025-05-29 19:26:49,326 - WARNING - Label 4049: Only 1 token with top relevance score (need 50) - [learning2rank.py:701:test_scored_tokens]
2025-05-29 19:26:49,330 - WARNING - Label 3517: Only 1 token with top relevance score (need 50) - [learning2rank.py:701:test_scored_tokens]
2025-05-29 19:26:49,334 - WARNING - Label 4308: Only 1 token with top relevance score (need 50) - [learning2rank.py:701:test_scored_tokens]
2025-05-29 19:26:49,337 - WARNING - Label 519: Only 1 token with top relevance score (need 50) - [learning2rank.py:701:test_scored_tokens]
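The quantization step maps each label's 32,768 token ranks onto 101 discrete relevance grades, which is why the warnings above fire when a label has almost no tokens at the top grade. A minimal sketch of one plausible bucketing, assuming rank 0 is most relevant; the formula and names are assumptions, only the shapes and the 101 levels come from the log:

```python
import torch

def quantize_ranks(ranks: torch.Tensor, n_levels: int = 101) -> torch.Tensor:
    # Bucket integer ranks (0 = most relevant) into n_levels grades,
    # mapping the best rank to the highest grade (n_levels - 1).
    vocab = ranks.shape[-1]
    return (n_levels - 1) - (ranks * n_levels) // vocab

ranks  = torch.randperm(32768)   # one label's token ranks, 0..32767
grades = quantize_ranks(ranks)   # integer grades in 0..100
```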
2025-05-29 19:26:49,365 - INFO - ******************************************************************************** - [learning2rank.py:108:log_section]
2025-05-29 19:26:49,365 - INFO - * 🌟 STARTING LEARNING TO RANK MODEL TRAINING * - [learning2rank.py:109:log_section]
2025-05-29 19:26:49,365 - INFO - ******************************************************************************** - [learning2rank.py:112:log_section]
2025-05-29 19:26:49,366 - INFO - 🔐 Loaded authentication token from environment - [learning2rank.py:3889:main]
2025-05-29 19:26:49,366 - INFO - 🏷️ Hub Model ID for this Learning to Rank task: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r - [learning2rank.py:3893:main]
2025-05-29 19:26:49,366 - INFO - -------------------------------------------------------------------------------- - [learning2rank.py:108:log_section]
2025-05-29 19:26:49,366 - INFO - - 📋 MODEL EXISTENCE CHECK - - [learning2rank.py:109:log_section]
2025-05-29 19:26:49,366 - INFO - -------------------------------------------------------------------------------- - [learning2rank.py:112:log_section]
2025-05-29 19:26:49,366 - INFO - 🔍 Checking model existence locally and on Hugging Face Hub... - [learning2rank.py:407:check_model_existence]
2025-05-29 19:26:49,366 - INFO - Model not found locally at: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r - [learning2rank.py:414:check_model_existence]
2025-05-29 19:26:49,481 - INFO - Model exists on Hugging Face Hub with ID: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r - [learning2rank.py:426:check_model_existence]
2025-05-29 19:26:49,481 - INFO - 📁 Model exists either locally or on Hub - [learning2rank.py:452:check_model_existence]
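The check itself is likely a directory test plus a Hub query; a sketch using huggingface_hub's repo_exists (a real API), with the path and model ID from the log:

```python
import os
from huggingface_hub import repo_exists

local_dir = "../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r"
hub_id    = "deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r"

found_locally = os.path.isdir(local_dir) and bool(os.listdir(local_dir))  # False per the log
found_on_hub  = repo_exists(hub_id)                                       # True per the log
```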
2025-05-29 19:26:49,481 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:108:log_section]
2025-05-29 19:26:49,481 - INFO - + STARTING FRESH TRAINING + - [learning2rank.py:109:log_section]
2025-05-29 19:26:49,481 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:112:log_section]
2025-05-29 19:26:49,481 - INFO - 🔄 Starting fresh training (either forced or model not found)... - [learning2rank.py:3906:main]
2025-05-29 19:26:49,496 - WARNING - Note: Environment variable `HF_TOKEN` is set and is the current active token, independently of the token you've just configured. - [_login.py:415:_login]
2025-05-29 19:26:49,496 - INFO - 🔑 Successfully authenticated with Hugging Face Hub - [learning2rank.py:288:authenticate_hf]
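Authentication presumably boils down to huggingface_hub.login with a token read from the environment, which would also explain the HF_TOKEN warning above:

```python
import os
from huggingface_hub import login

# Matches "Loaded authentication token from environment" earlier in the log;
# the warning appears because HF_TOKEN takes precedence over a configured token.
login(token=os.environ["HF_TOKEN"])
```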
2025-05-29 19:26:49,496 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:108:log_section]
2025-05-29 19:26:49,496 - INFO - + LOADING BASE MODEL + - [learning2rank.py:109:log_section]
2025-05-29 19:26:49,496 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:112:log_section]
2025-05-29 19:26:49,496 - INFO - 📥 Loading pretrained model and tokenizer... - [learning2rank.py:3948:main]
2025-05-29 19:26:49,496 - INFO - 🚀 Starting model and tokenizer loading process... - [learning2rank.py:916:load_base_model_and_tokenizer]
2025-05-29 19:26:49,497 - INFO - 📊 Quantization config: BitsAndBytesConfig {
"_load_in_4bit": true,
"_load_in_8bit": false,
"bnb_4bit_compute_dtype": "bfloat16",
"bnb_4bit_quant_storage": "uint8",
"bnb_4bit_quant_type": "nf4",
"bnb_4bit_use_double_quant": true,
"llm_int8_enable_fp32_cpu_offload": false,
"llm_int8_has_fp16_weight": false,
"llm_int8_skip_modules": null,
"llm_int8_threshold": 6.0,
"load_in_4bit": true,
"load_in_8bit": false,
"quant_method": "bitsandbytes"
}
- [learning2rank.py:925:load_base_model_and_tokenizer]
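The printed config maps one-to-one onto transformers' BitsAndBytesConfig; the non-default fields from the dump reconstruct it exactly:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)
```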
2025-05-29 19:26:49,497 - INFO - 🔤 Loading tokenizer for model: mistralai/Mistral-7B-Instruct-v0.3... - [learning2rank.py:929:load_base_model_and_tokenizer]
2025-05-29 19:26:50,177 - INFO - 📝 Setting pad token to eos token... - [learning2rank.py:933:load_base_model_and_tokenizer]
2025-05-29 19:26:50,177 - INFO - 🧠 Loading base model with quantization... - [learning2rank.py:941:load_base_model_and_tokenizer]
2025-05-29 19:26:50,740 - INFO - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` to a higher value to use more memory (at your own risk). - [modeling.py:991:get_balanced_memory]
2025-05-29 19:26:56,190 - INFO - 🔧 Setting up default LoRA configuration... - [learning2rank.py:964:load_base_model_and_tokenizer]
2025-05-29 19:26:56,190 - INFO - 🔍 LoRA config: r=16, alpha=32, targets={'o_proj', 'q_proj', 'k_proj', 'v_proj'}, dropout=0.05 - [learning2rank.py:987:load_base_model_and_tokenizer]
2025-05-29 19:26:56,190 - INFO - 🧩 Applying LoRA adapters to model... - [learning2rank.py:994:load_base_model_and_tokenizer]
2025-05-29 19:26:56,374 - INFO - Model and tokenizer successfully loaded! - [learning2rank.py:1001:load_base_model_and_tokenizer]
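Putting the pieces together, the load step likely looks like the following sketch. The model name, pad-token step, and LoRA hyperparameters all come from the log; device_map and the omitted task_type are assumptions, since the script wraps the model in a custom ranking head:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token        # the pad-token step above

base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,              # from the sketch above
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(base_model, lora_config)
```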
2025-05-29 19:26:56,374 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:108:log_section]
2025-05-29 19:26:56,374 - INFO - + DATA PREPROCESSING + - [learning2rank.py:109:log_section]
2025-05-29 19:26:56,374 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:112:log_section]
2025-05-29 19:26:56,374 - INFO - 🔄 Loading and preprocessing training data... - [learning2rank.py:3956:main]
2025-05-29 19:26:57,442 - INFO - 🔍 Verifying uniqueness of token IDs per label in scored_tokens... - [learning2rank.py:1279:preprocess_data]
2025-05-29 19:26:58,264 - INFO - All labels have unique token IDs in scored_tokens. - [learning2rank.py:1291:preprocess_data]
2025-05-29 19:26:58,264 - INFO - 🚀 Building a 2D lookup table for efficient token-to-relevance mapping across all labels! 🚀 - [learning2rank.py:1294:preprocess_data]
2025-05-29 19:26:58,264 - INFO - 🔢 Total labels = 7942 - [learning2rank.py:1297:preprocess_data]
2025-05-29 19:26:58,264 - INFO - 🔍 Precomputing token indices and corresponding relevance_values for each label... - [learning2rank.py:1298:preprocess_data]
2025-05-29 19:26:58,492 - INFO - 📊 Lookup table dimensions: 32768 vocabulary size × 7942 labels - [learning2rank.py:1305:preprocess_data]
2025-05-29 19:26:58,492 - INFO - This approach eliminates token comparison broadcasting and provides O(1) lookup time for relevance scores! - [learning2rank.py:1308:preprocess_data]
2025-05-29 19:26:58,492 - INFO - 🧮 Processing relevance calculations vectorized for maximum speed 🔥 - [learning2rank.py:1311:preprocess_data]
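The lookup table is a dense (vocab_size × n_labels) tensor filled once from the scored (token, label, relevance) triples, so the relevance of any pair is a single index instead of a broadcasted token comparison. A self-contained sketch with toy triples standing in for the real quantized L2R data:

```python
import torch

vocab_size, n_labels = 32768, 7942
lookup = torch.zeros(vocab_size, n_labels)

# Toy (token_id, label_id, relevance) triples; the real ones come from
# the quantized L2R frame loaded earlier.
tok_ids = torch.tensor([5, 17, 5])
lbl_ids = torch.tensor([0, 0, 1])
rel     = torch.tensor([100.0, 42.0, 7.0])
lookup[tok_ids, lbl_ids] = rel

# O(1) gather: relevance of every token in a document, for every label.
input_ids = torch.tensor([5, 17, 9])
targets = lookup[input_ids]        # shape (seq_len, n_labels)
```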
2025-05-29 19:26:58,657 - INFO - 🔍 Verifying token mappings with 10 samples... - [learning2rank.py:1344:verify_token_mappings]
2025-05-29 19:26:58,838 - INFO - Token mappings verification completed successfully! 🎉 - [learning2rank.py:1435:verify_token_mappings]
2025-05-29 19:26:58,842 - INFO - 🔄 Processing dataset with map... - [learning2rank.py:1499:preprocess_data]
2025-05-29 19:26:59,159 - INFO - Dataset built in 0h 0m 0.32s - [learning2rank.py:1522:preprocess_data]
2025-05-29 19:26:59,171 - INFO - Training set size: 173 samples 🏋️ - [learning2rank.py:1553:preprocess_data]
2025-05-29 19:26:59,172 - INFO - Evaluation set size: 35 samples 🔍 - [learning2rank.py:1554:preprocess_data]
2025-05-29 19:26:59,172 - INFO - 🚀 Created HuggingFace Dataset with 208 samples, 7942 labels - [learning2rank.py:1562:preprocess_data]
2025-05-29 19:26:59,172 - INFO - 🏷️ Number of unique ICD-10 codes: 7942 - [learning2rank.py:3969:main]
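The 173/35 split over 208 samples is consistent with a plain datasets split; a sketch under that assumption, where records is a placeholder for the preprocessed samples and the seed is illustrative:

```python
from datasets import Dataset

ds = Dataset.from_list(records)                      # 208 preprocessed samples
splits = ds.train_test_split(test_size=35, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]  # 173 / 35
```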
2025-05-29 19:26:59,173 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:108:log_section]
2025-05-29 19:26:59,173 - INFO - + MODEL INITIALIZATION + - [learning2rank.py:109:log_section]
2025-05-29 19:26:59,173 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:112:log_section]
2025-05-29 19:26:59,173 - INFO - 🧠 Initializing custom L2R model to output per-token relevance scores for each ICD-10 code. - [learning2rank.py:3972:main]
2025-05-29 19:27:00,707 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:108:log_section]
2025-05-29 19:27:00,707 - INFO - + TRAINING PREPARATION + - [learning2rank.py:109:log_section]
2025-05-29 19:27:00,707 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:112:log_section]
2025-05-29 19:27:00,707 - INFO - ⚙️ Preparing training components and optimizers... - [learning2rank.py:3979:main]
2025-05-29 19:27:00,739 - INFO - 🖥️ Device: NVIDIA GH200 480GB - [learning2rank.py:2007:log_training_configuration]
2025-05-29 19:27:00,739 - INFO - 🔋 CUDA Available: True - [learning2rank.py:2010:log_training_configuration]
2025-05-29 19:27:00,739 - INFO - 💾 CUDA Device Count: 1 - [learning2rank.py:2011:log_training_configuration]
2025-05-29 19:27:00,740 - INFO -
📋 Training Configuration 📋
+----------+-----------------------------+--------------------------------------------------+
| 🌟 Emoji | 🏷️ Parameter | 📊 Value |
+----------+-----------------------------+--------------------------------------------------+
| 📁 | Output Directory | ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r |
| 🔁 | Training Epochs | 4 |
| 🏋️ | Train Batch Size | 1 |
| 🔍 | Eval Batch Size | 1 |
| 📊 | Gradient Accumulation Steps | 4 |
| 🚀 | Learning Rate | 0.0001 |
| 🌅 | Warmup Steps | 0 |
| 💾 | Save Strategy | epoch |
| 💾 | Save Total Limit | 10 |
| 📊 | Evaluation Strategy | epoch |
| 🎯 | Best Model Metric | ndcg@25 |
| 📝 | Logging Strategy | steps (every 10 steps) |
| 🌐 | Push to Hub | True |
| 🌐 | Hub Model ID | deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r |
| 🔢 | Steps per Epoch | 43 |
| 🔢 | Total Training Steps | 172 |
| 🔢 | Evaluation Steps | 35 |
| 📊 | Training Dataset Size | 173 samples 🏋️ |
| 📊 | Evaluation Dataset Size | 35 samples 🔍 |
+----------+-----------------------------+--------------------------------------------------+ - [learning2rank.py:1999:log_training_args]
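The table maps directly onto transformers' TrainingArguments; a reconstruction using only the values shown (greater_is_better is an assumption implied by ndcg@25 being the best-model metric, and eval_strategy is spelled evaluation_strategy on older transformers):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r",
    num_train_epochs=4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=1e-4,
    warmup_steps=0,
    save_strategy="epoch",
    save_total_limit=10,
    eval_strategy="epoch",
    metric_for_best_model="ndcg@25",
    greater_is_better=True,
    logging_strategy="steps",
    logging_steps=10,
    push_to_hub=True,
    hub_model_id="deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r",
)
```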
2025-05-29 19:27:00,740 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:108:log_section]
2025-05-29 19:27:00,740 - INFO - + MODEL TRAINING + - [learning2rank.py:109:log_section]
2025-05-29 19:27:00,740 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:112:log_section]
2025-05-29 19:27:00,740 - INFO - 🏋️ Starting model training process... - [learning2rank.py:3999:main]
2025-05-29 19:27:00,740 - INFO - 🏁 Preparing Custom Trainer 🛠️ - [learning2rank.py:3200:train_model]
2025-05-29 19:27:00,784 - INFO - Registering the tokenizer mistralai/Mistral-7B-Instruct-v0.3 with the Custom Trainer - [learning2rank.py:2639:__init__]
2025-05-29 19:27:00,784 - INFO - 🏋️ Commencing Model Training 💪 - [learning2rank.py:3241:train_model]
2025-05-29 19:27:01,050 - INFO - 🚀 Starting Training... - [learning2rank.py:2389:on_train_begin]
2025-05-29 19:27:20,294 - INFO -
🚂 Training Metrics (Step 10) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -4.58268e+16 |
+---------------+--------------+
| grad_norm | nan |
+---------------+--------------+
| learning_rate | 0.0001 |
+---------------+--------------+
| epoch | 0.231214 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:27:34,347 - INFO -
🚂 Training Metrics (Step 20) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -4.34403e+17 |
+---------------+--------------+
| grad_norm | nan |
+---------------+--------------+
| learning_rate | 0.0001 |
+---------------+--------------+
| epoch | 0.462428 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:27:48,535 - INFO -
🚂 Training Metrics (Step 30) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -6.33341e+16 |
+---------------+--------------+
| grad_norm | nan |
+---------------+--------------+
| learning_rate | 0.0001 |
+---------------+--------------+
| epoch | 0.693642 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:28:02,663 - INFO -
🚂 Training Metrics (Step 40) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -6.34723e+18 |
+---------------+--------------+
| grad_norm | nan |
+---------------+--------------+
| learning_rate | 0.0001 |
+---------------+--------------+
| epoch | 0.924855 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:28:07,197 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [learning2rank.py:2755:evaluate]
2025-05-29 19:31:21,756 - WARNING - No valid samples found for metric 'precision@25'. - [learning2rank.py:2859:evaluate]
2025-05-29 19:31:21,757 - INFO -
🔍 Evaluation Metrics 🔍
+-------------------+--------------+
| Metric | Value |
+===================+==============+
| eval_loss | -1.17021e+17 |
+-------------------+--------------+
| eval_ndcg | 0.955488 |
+-------------------+--------------+
| eval_ndcg@25 | 0.18984 |
+-------------------+--------------+
| eval_precision@25 | 0 |
+-------------------+--------------+ - [learning2rank.py:2326:on_evaluate]
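For reference, ndcg@25 as reported above is standard truncated NDCG over the graded token relevances; a minimal reference implementation (linear gain assumed; the script's actual metric code may differ):

```python
import torch

def ndcg_at_k(scores: torch.Tensor, relevance: torch.Tensor, k: int = 25) -> float:
    # DCG of the top-k predicted tokens, normalized by the ideal DCG.
    top = scores.topk(k).indices
    discounts = 1.0 / torch.log2(torch.arange(2, k + 2, dtype=torch.float))
    dcg   = (relevance[top] * discounts).sum()
    ideal = (relevance.sort(descending=True).values[:k] * discounts).sum()
    return (dcg / ideal).item() if ideal > 0 else 0.0
```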
2025-05-29 19:31:23,021 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-44 - [learning2rank.py:2879:_save]
2025-05-29 19:31:23,022 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-44 - [learning2rank.py:2884:_save]
2025-05-29 19:31:23,023 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-44:
+---------+-------------------+------------+
| Index | Saved File | Size |
+=========+===================+============+
| 1 | training_args.bin | 0.01 MB |
+---------+-------------------+------------+
| 2 | model.safetensors | 4122.74 MB |
+---------+-------------------+------------+
| 3 | config.json | 0.00 MB |
+---------+-------------------+------------+ - [learning2rank.py:2901:_save]
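Each checkpoint listing like the one above is consistent with a straight safetensors dump of the custom model plus its config; a sketch (the Trainer's real _save also handles tied and shared tensors, which this toy version ignores):

```python
import os
from safetensors.torch import save_file

ckpt = "../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-44"
os.makedirs(ckpt, exist_ok=True)
save_file({k: v.contiguous() for k, v in model.state_dict().items()},
          os.path.join(ckpt, "model.safetensors"))
```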
2025-05-29 19:31:36,370 - INFO -
🚂 Training Metrics (Step 50) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -4.29741e+18 |
+---------------+--------------+
| grad_norm | nan |
+---------------+--------------+
| learning_rate | 0.0001 |
+---------------+--------------+
| epoch | 1.13873 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:31:50,557 - INFO -
🚂 Training Metrics (Step 60) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -1.10586e+17 |
+---------------+--------------+
| grad_norm | nan |
+---------------+--------------+
| learning_rate | 9.9e-05 |
+---------------+--------------+
| epoch | 1.36994 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:32:04,594 - INFO -
🚂 Training Metrics (Step 70) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -6.65756e+18 |
+---------------+--------------+
| grad_norm | 2.41222e+18 |
+---------------+--------------+
| learning_rate | 9.4e-05 |
+---------------+--------------+
| epoch | 1.60116 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:32:18,534 - INFO -
🚂 Training Metrics (Step 80) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -9.52556e+16 |
+---------------+--------------+
| grad_norm | 1.31876e+18 |
+---------------+--------------+
| learning_rate | 8.8e-05 |
+---------------+--------------+
| epoch | 1.83237 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:32:28,622 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [learning2rank.py:2755:evaluate]
2025-05-29 19:35:41,610 - INFO -
🔍 Evaluation Metrics 🔍
+-------------------+--------------+
| Metric | Value |
+===================+==============+
| eval_loss | -3.19605e+17 |
+-------------------+--------------+
| eval_ndcg | 0.956218 |
+-------------------+--------------+
| eval_ndcg@25 | 0.644859 |
+-------------------+--------------+
| eval_precision@25 | 0.524571 |
+-------------------+--------------+ - [learning2rank.py:2326:on_evaluate]
2025-05-29 19:35:42,773 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-88 - [learning2rank.py:2879:_save]
2025-05-29 19:35:42,774 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-88 - [learning2rank.py:2884:_save]
2025-05-29 19:35:42,775 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-88:
+---------+-------------------+------------+
| Index | Saved File | Size |
+=========+===================+============+
| 1 | training_args.bin | 0.01 MB |
+---------+-------------------+------------+
| 2 | model.safetensors | 4122.74 MB |
+---------+-------------------+------------+
| 3 | config.json | 0.00 MB |
+---------+-------------------+------------+ - [learning2rank.py:2901:_save]
2025-05-29 19:35:50,541 - INFO -
🚂 Training Metrics (Step 90) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -7.78904e+17 |
+---------------+--------------+
| grad_norm | 4.04588e+17 |
+---------------+--------------+
| learning_rate | 8.3e-05 |
+---------------+--------------+
| epoch | 2.04624 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:36:04,506 - INFO -
🚂 Training Metrics (Step 100) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -2.16708e+17 |
+---------------+--------------+
| grad_norm | 6.69977e+17 |
+---------------+--------------+
| learning_rate | 7.7e-05 |
+---------------+--------------+
| epoch | 2.27746 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:36:18,486 - INFO -
🚂 Training Metrics (Step 110) 🚂
+---------------+-------------+
| Metric | Value |
+===============+=============+
| loss | -3.3602e+18 |
+---------------+-------------+
| grad_norm | nan |
+---------------+-------------+
| learning_rate | 7.3e-05 |
+---------------+-------------+
| epoch | 2.50867 |
+---------------+-------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:36:32,497 - INFO -
🚂 Training Metrics (Step 120) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -4.15132e+17 |
+---------------+--------------+
| grad_norm | 7.31787e+17 |
+---------------+--------------+
| learning_rate | 6.7e-05 |
+---------------+--------------+
| epoch | 2.73988 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:36:46,461 - INFO -
🚂 Training Metrics (Step 130) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -1.23755e+18 |
+---------------+--------------+
| grad_norm | 2.11757e+17 |
+---------------+--------------+
| learning_rate | 6.2e-05 |
+---------------+--------------+
| epoch | 2.9711 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:36:48,212 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [learning2rank.py:2755:evaluate]
2025-05-29 19:40:00,995 - INFO -
🔍 Evaluation Metrics 🔍
+-------------------+--------------+
| Metric | Value |
+===================+==============+
| eval_loss | -4.05479e+17 |
+-------------------+--------------+
| eval_ndcg | 0.956806 |
+-------------------+--------------+
| eval_ndcg@25 | 0.793177 |
+-------------------+--------------+
| eval_precision@25 | 0.890286 |
+-------------------+--------------+ - [learning2rank.py:2326:on_evaluate]
2025-05-29 19:40:02,153 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-132 - [learning2rank.py:2879:_save]
2025-05-29 19:40:02,154 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-132 - [learning2rank.py:2884:_save]
2025-05-29 19:40:02,155 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-132:
+---------+-------------------+------------+
| Index | Saved File | Size |
+=========+===================+============+
| 1 | training_args.bin | 0.01 MB |
+---------+-------------------+------------+
| 2 | model.safetensors | 4122.74 MB |
+---------+-------------------+------------+
| 3 | config.json | 0.00 MB |
+---------+-------------------+------------+ - [learning2rank.py:2901:_save]
2025-05-29 19:40:18,266 - INFO -
🚂 Training Metrics (Step 140) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -9.65112e+18 |
+---------------+--------------+
| grad_norm | 1.7024e+17 |
+---------------+--------------+
| learning_rate | 5.7e-05 |
+---------------+--------------+
| epoch | 3.18497 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:40:32,114 - INFO -
🚂 Training Metrics (Step 150) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -6.81527e+18 |
+---------------+--------------+
| grad_norm | 1.33906e+17 |
+---------------+--------------+
| learning_rate | 5.2e-05 |
+---------------+--------------+
| epoch | 3.41619 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:40:45,965 - INFO -
🚂 Training Metrics (Step 160) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -4.90954e+17 |
+---------------+--------------+
| grad_norm | 2.87998e+17 |
+---------------+--------------+
| learning_rate | 4.6e-05 |
+---------------+--------------+
| epoch | 3.6474 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:40:59,736 - INFO -
🚂 Training Metrics (Step 170) 🚂
+---------------+--------------+
| Metric | Value |
+===============+==============+
| loss | -3.49743e+17 |
+---------------+--------------+
| grad_norm | 2.0459e+17 |
+---------------+--------------+
| learning_rate | 4e-05 |
+---------------+--------------+
| epoch | 3.87861 |
+---------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:41:03,699 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-172 - [learning2rank.py:2879:_save]
2025-05-29 19:41:03,701 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-172 - [learning2rank.py:2884:_save]
2025-05-29 19:41:03,701 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-172:
+---------+-------------------+------------+
| Index | Saved File | Size |
+=========+===================+============+
| 1 | training_args.bin | 0.01 MB |
+---------+-------------------+------------+
| 2 | model.safetensors | 4122.74 MB |
+---------+-------------------+------------+
| 3 | config.json | 0.00 MB |
+---------+-------------------+------------+ - [learning2rank.py:2901:_save]
2025-05-29 19:41:03,908 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [learning2rank.py:2755:evaluate]
2025-05-29 19:44:16,555 - INFO -
🔍 Evaluation Metrics 🔍
+-------------------+-------------+
| Metric | Value |
+===================+=============+
| eval_loss | -4.1649e+17 |
+-------------------+-------------+
| eval_ndcg | 0.956866 |
+-------------------+-------------+
| eval_ndcg@25 | 0.873974 |
+-------------------+-------------+
| eval_precision@25 | 0.930286 |
+-------------------+-------------+ - [learning2rank.py:2326:on_evaluate]
2025-05-29 19:44:18,478 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-172 - [learning2rank.py:2879:_save]
2025-05-29 19:44:18,479 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-172 - [learning2rank.py:2884:_save]
2025-05-29 19:44:18,480 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-172:
+---------+--------------------+------------+
| Index | Saved File | Size |
+=========+====================+============+
| 1 | training_args.bin | 0.01 MB |
+---------+--------------------+------------+
| 2 | optimizer.pt | 352.39 MB |
+---------+--------------------+------------+
| 3 | model.safetensors | 4122.74 MB |
+---------+--------------------+------------+
| 4 | scaler.pt | 0.00 MB |
+---------+--------------------+------------+
| 5 | config.json | 0.00 MB |
+---------+--------------------+------------+
| 6 | scheduler.pt | 0.00 MB |
+---------+--------------------+------------+
| 7 | trainer_state.json | 0.00 MB |
+---------+--------------------+------------+
| 8 | rng_state.pth | 0.01 MB |
+---------+--------------------+------------+ - [learning2rank.py:2901:_save]
2025-05-29 19:44:18,765 - INFO - 📂 Loading best model from ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-172 - [learning2rank.py:2957:_load_best_model]
2025-05-29 19:44:18,765 - INFO - 🖥️ Model is on device: cuda:0 - [learning2rank.py:2967:_load_best_model]
2025-05-29 19:44:18,821 - INFO - 🔑 Key order comparison:
+---------+------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
| Index | Saved state_dict Keys | Model state_dict Keys |
+=========+========================================================================+=============================================================================================+
| 1 | base_model.base_model.model.lm_head.weight | base_model.base_model.model.model.embed_tokens.weight |
+---------+------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
| 2 | base_model.base_model.model.model.embed_tokens.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight |
+---------+------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
| 3 | base_model.base_model.model.model.layers.0.input_layernorm.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight.absmax |
+---------+------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
| 4 | base_model.base_model.model.model.layers.0.mlp.down_proj.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight.quant_map |
+---------+------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
| 5 | base_model.base_model.model.model.layers.0.mlp.down_proj.weight.absmax | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight.nested_absmax |
+---------+------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+ - [learning2rank.py:2991:_load_best_model]
2025-05-29 19:44:19,629 - INFO - Loaded best model weights from ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-172/model.safetensors - [learning2rank.py:3008:_load_best_model]
2025-05-29 19:44:19,686 - INFO - ✔️ Weight for base_model.base_model.model.model.embed_tokens.weight matches between saved and loaded state_dict - [learning2rank.py:3020:_load_best_model]
2025-05-29 19:44:19,708 - INFO - ✔️ Weight for base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight matches between saved and loaded state_dict - [learning2rank.py:3020:_load_best_model]
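Reloading the best checkpoint then amounts to a safetensors load plus the key-by-key comparison logged above; a sketch, reusing the model from the LoRA sketch earlier:

```python
from safetensors.torch import load_file

best = "../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/checkpoint-172/model.safetensors"
state_dict = load_file(best)
# strict=False because the live 4-bit model also carries quantization
# metadata keys (.absmax, .quant_map, ...) visible in the key-order table.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
```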
2025-05-29 19:44:19,725 - INFO -
🚂 Training Metrics (Step 172) 🚂
+--------------------------+--------------+
| Metric | Value |
+==========================+==============+
| train_runtime | 1038.68 |
+--------------------------+--------------+
| train_samples_per_second | 0.666 |
+--------------------------+--------------+
| train_steps_per_second | 0.166 |
+--------------------------+--------------+
| total_flos | 0 |
+--------------------------+--------------+
| train_loss | -2.40817e+18 |
+--------------------------+--------------+
| epoch | 3.92485 |
+--------------------------+--------------+ - [learning2rank.py:2307:on_log]
2025-05-29 19:44:19,726 - INFO - Training Completed! - [learning2rank.py:2456:on_train_end]
2025-05-29 19:44:19,800 - INFO - 📊 Training loss plot saved as '../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/train_loss_plot.png' - [learning2rank.py:2559:on_train_end]
2025-05-29 19:44:19,858 - INFO - 📊 Evaluation loss plot saved as '../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/eval_loss_plot.png' - [learning2rank.py:2573:on_train_end]
2025-05-29 19:44:19,918 - INFO - 📊 Evaluation metric plot saved as '../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r/eval_ndcg@25_plot.png' - [learning2rank.py:2594:on_train_end]
2025-05-29 19:44:19,919 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:108:log_section]
2025-05-29 19:44:19,919 - INFO - + MODEL SAVING + - [learning2rank.py:109:log_section]
2025-05-29 19:44:19,919 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:112:log_section]
2025-05-29 19:44:19,919 - INFO - 💾 Saving trained model and pushing to Hugging Face Hub... - [learning2rank.py:4014:main]
2025-05-29 19:44:19,919 - INFO - 📁 Creating/using output directory: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r - [learning2rank.py:3275:save_and_push]
2025-05-29 19:44:21,100 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r - [learning2rank.py:2879:_save]
2025-05-29 19:44:21,102 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r - [learning2rank.py:2884:_save]
2025-05-29 19:44:21,103 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r:
+---------+-----------------------+------------+
| Index | Saved File | Size |
+=========+=======================+============+
| 1 | eval_loss_plot.png | 0.03 MB |
+---------+-----------------------+------------+
| 2 | training_args.bin | 0.01 MB |
+---------+-----------------------+------------+
| 3 | model.safetensors | 4122.74 MB |
+---------+-----------------------+------------+
| 4 | eval_ndcg@25_plot.png | 0.03 MB |
+---------+-----------------------+------------+
| 5 | config.json | 0.00 MB |
+---------+-----------------------+------------+
| 6 | train_loss_plot.png | 0.04 MB |
+---------+-----------------------+------------+ - [learning2rank.py:2901:_save]
2025-05-29 19:44:24,684 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r - [learning2rank.py:2879:_save]
2025-05-29 19:44:24,685 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r - [learning2rank.py:2884:_save]
2025-05-29 19:44:24,686 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r:
+---------+-----------------------+------------+
| Index | Saved File | Size |
+=========+=======================+============+
| 1 | eval_loss_plot.png | 0.03 MB |
+---------+-----------------------+------------+
| 2 | training_args.bin | 0.01 MB |
+---------+-----------------------+------------+
| 3 | model.safetensors | 4122.74 MB |
+---------+-----------------------+------------+
| 4 | eval_ndcg@25_plot.png | 0.03 MB |
+---------+-----------------------+------------+
| 5 | config.json | 0.00 MB |
+---------+-----------------------+------------+
| 6 | train_loss_plot.png | 0.04 MB |
+---------+-----------------------+------------+ - [learning2rank.py:2901:_save]
2025-05-29 19:45:41,784 - INFO - 💾 Model saved to: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r - [learning2rank.py:3279:save_and_push]
2025-05-29 19:45:41,814 - INFO - 🖌️ Tokenizer saved to: ../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r - [learning2rank.py:3283:save_and_push]
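The final save-and-push step presumably uses the standard save_pretrained / push_to_hub pair; a sketch with the path and hub ID from the log (the push itself is not shown in this excerpt):

```python
out_dir = "../tmp/MIMIC4_DEMO/mistral7b_mimic4_l2r"
hub_id  = "deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r"

model.save_pretrained(out_dir, safe_serialization=True)
tokenizer.save_pretrained(out_dir)
model.push_to_hub(hub_id)
tokenizer.push_to_hub(hub_id)
```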