deb101 committed · Commit 2339d74 · verified · 1 Parent(s): 5b02b14

Trained L2R token ranking model on MIMIC-IV

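The changes below register the custom LTRModel architecture in config.json and update the LoRA adapter under ground_model/. As a minimal, hypothetical sketch of attaching the pushed adapter to the base model (assuming the adapter under ground_model/ follows the standard PEFT layout and using the Hub id deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r recorded in the training log; the custom LTRModel ranking head itself is not reconstructed by this snippet):

# Hypothetical usage sketch, not part of this commit: load the Mistral base model
# and attach the LoRA adapter stored in the ground_model/ subfolder of the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r"  # id logged during training

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id, subfolder="ground_model")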
config.json CHANGED
@@ -1,8 +1,7 @@
 {
-  "_attn_implementation_autoset": true,
   "_name_or_path": "mistralai/Mistral-7B-Instruct-v0.3",
   "architectures": [
-    "MistralForCausalLM"
+    "LTRModel"
   ],
   "attention_dropout": 0.0,
   "bos_token_id": 1,
@@ -15926,7 +15925,7 @@
   "rope_theta": 1000000.0,
   "sliding_window": null,
   "tie_word_embeddings": false,
-  "torch_dtype": "bfloat16",
+  "torch_dtype": "float32",
   "transformers_version": "4.49.0",
   "use_cache": true,
   "vocab_size": 32768
ground_model/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:becfd0480e4c961e227b6af3a815d48ee7eb2863205ded4ff91d1794133f96ac
+oid sha256:db8f0b8f4ee1b80398c66c7791b0ecd23ba19786b0c8e88e94ade7cb0f8ee572
 size 54560368
training_l2r_log_2025-06-12_20-52-17.log ADDED
@@ -0,0 +1,934 @@
1
+ 2025-06-12 20:52:17,809 - INFO - 📝 Logging initialized. Log file created at: ../tmp/logs/training_l2r_log_2025-06-12_20-52-17.log - [learning2rank.py:287:setup_logger]
2
+ 2025-06-12 20:52:17,809 - INFO - ================================================================================ - [learning2rank.py:109:log_section]
3
+ 2025-06-12 20:52:17,809 - INFO - = 📌 INITIALIZING TRAINING ENVIRONMENT = - [learning2rank.py:110:log_section]
4
+ 2025-06-12 20:52:17,809 - INFO - ================================================================================ - [learning2rank.py:113:log_section]
5
+ 2025-06-12 20:52:17,809 - INFO - 🚀 Setting up data paths and environment variables... - [learning2rank.py:3785:main]
6
+ 2025-06-12 20:52:17,810 - INFO - 🛠️ Command-line Arguments: - [learning2rank.py:303:print_args]
7
+ 2025-06-12 20:52:17,810 - INFO -
8
+ 🔹 output_dir: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b
9
+ 🔹 source_url: XURLs.MIMIC4_DEMO
10
+ 🔹 data: mimic4_icd10_full
11
+ 🔹 data_l2r_fname_prefix: mimic4_icd10
12
+ 🔹 logfile: training_l2r_log
13
+ 🔹 base_dir: ../tmp/MIMIC4_DEMO
14
+ 🔹 l2r_boot_dir: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b
15
+ 🔹 hub_model_id: deb101/mistral-7b-instruct-v0.3-mimic4-adapt
16
+ 🔹 model_name: mistralai/Mistral-7B-Instruct-v0.3
17
+ 🔹 max_length: 512
18
+ 🔹 do_fresh_training: True
19
+ 🔹 load_from_checkpoint: False
20
+ 🔹 task: l2r
21
+ 🔹 num_train_epochs: 5
22
+ 🔹 metric_for_best_model: ndcg@25
23
+ 🔹 learning_rate: 0.0001
24
+ 🔹 warmup_steps: 0
25
+ 🔹 generate_report: False
26
+ 🔹 logfile_path: ../tmp/logs/training_l2r_log_2025-06-12_20-52-17.log
27
+ 🔹 source: /home/ubuntu/.xcube/data/mimic4_demo - [learning2rank.py:304:print_args]
28
+ 2025-06-12 20:52:17,810 - INFO - ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖ - [learning2rank.py:305:print_args]
29
+ 2025-06-12 20:52:17,821 - INFO -
30
+ 🚀 Quick Git Info: 📁 xcube | 🌿 plant | 🔍 b97f5b9 | 👤 Debjyoti Saha Roy | ⚡ MIXED (1 staged, 1 unstaged) | 🔬 git show b97f5b9 - [learning2rank.py:3796:main]
31
+ 2025-06-12 20:52:17,821 - INFO - 📁 Using L2R boot directory: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b - [learning2rank.py:3802:main]
32
+ 2025-06-12 20:52:17,821 - INFO - 📂 Using output directory: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:3804:main]
33
+ 2025-06-12 20:52:17,821 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:109:log_section]
34
+ 2025-06-12 20:52:17,821 - INFO - + ✨ LOADING DATASETS + - [learning2rank.py:110:log_section]
35
+ 2025-06-12 20:52:17,821 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:113:log_section]
36
+ 2025-06-12 20:52:17,821 - INFO - 📊 Loading main dataset and L2R dataset... - [learning2rank.py:3807:main]
37
+ 2025-06-12 20:52:17,821 - INFO - 📂 Loading main data from: /home/ubuntu/.xcube/data/mimic4_demo/mimic4_icd10_full.csv - [learning2rank.py:333:get_data]
38
+ 2025-06-12 20:52:26,226 - INFO - ✅ Successfully loaded main data: 122279 rows - [learning2rank.py:346:get_data]
39
+ 2025-06-12 20:52:26,226 - INFO - 📂 Loading L2R data from: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b/mimic4_icd10_tok_rank_per_lbl.ft - [learning2rank.py:352:get_data]
40
+ 2025-06-12 20:52:26,226 - INFO - 📂 Loading L2R tokens from: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b/mimic4_icd10_tok.ft - [learning2rank.py:355:get_data]
41
+ 2025-06-12 20:52:26,226 - INFO - 📂 Loading L2R labels from: ../tmp/MIMIC4_DEMO/mimic4_l2rboot_mistral7b/mimic4_icd10_lbl.ft - [learning2rank.py:358:get_data]
42
+ 2025-06-12 20:52:27,974 - INFO - ✅ Successfully loaded L2R data: 260243456 rows - [learning2rank.py:365:get_data]
43
+ 2025-06-12 20:52:27,975 - INFO - ✅ Successfully loaded L2R tokens: 32768 tokens - [learning2rank.py:366:get_data]
44
+ 2025-06-12 20:52:27,975 - INFO - ✅ Successfully loaded L2R labels: 7942 rows - [learning2rank.py:367:get_data]
45
+ 2025-06-12 20:52:27,975 - INFO - 🔄 Total data loaded: 122279 main rows, 260243456 L2R rows - [learning2rank.py:374:get_data]
46
+ 2025-06-12 20:52:27,975 - INFO - ✨ Successfully loaded both datasets: - [learning2rank.py:398:load_datasets]
47
+ 2025-06-12 20:52:27,975 - INFO - - 📄 Main dataset: 122279 records - [learning2rank.py:399:load_datasets]
48
+ 2025-06-12 20:52:27,975 - INFO - - 📄 L2R dataset: 260243456 records - [learning2rank.py:400:load_datasets]
49
+ 2025-06-12 20:52:27,975 - INFO - - 🔤 Tokens: 32768 items - [learning2rank.py:401:load_datasets]
50
+ 2025-06-12 20:52:27,975 - INFO - - 🏷️ Labels: 7942 items - [learning2rank.py:402:load_datasets]
51
+ 2025-06-12 20:52:27,982 - INFO - ✅ Data loading completed successfully - [learning2rank.py:410:load_datasets]
52
+ 2025-06-12 20:52:29,271 - INFO - ⏳ Starting quantization of ranks for DataFrame with 260243456 rows. Containing 32768 unique tokens & 7942 unique labels - [learning2rank.py:529:quantize_l2r_data]
53
+ 2025-06-12 20:52:29,272 - INFO - 🔄 Quantizing those 32768 unique token ranks into 101 quantization levels for each label - [learning2rank.py:554:quantize_l2r_data]
54
+ 2025-06-12 20:53:15,689 - INFO - ✅ Completed quantization: Produced tensor of shape torch.Size([7942, 32768, 4]) with 101 quantization levels per label - [learning2rank.py:608:quantize_l2r_data]
55
+ 2025-06-12 20:53:15,715 - WARNING - Label 2051: Only 1 tokens with top relevance score (need 50) - [learning2rank.py:724:test_scored_tokens]
56
+ 2025-06-12 20:53:15,720 - WARNING - Label 3536: Only 1 tokens with top relevance score (need 50) - [learning2rank.py:724:test_scored_tokens]
57
+ 2025-06-12 20:53:15,724 - WARNING - Label 1179: Only 1 tokens with top relevance score (need 50) - [learning2rank.py:724:test_scored_tokens]
58
+ 2025-06-12 20:53:15,728 - WARNING - Label 6454: Only 1 tokens with top relevance score (need 50) - [learning2rank.py:724:test_scored_tokens]
59
+ 2025-06-12 20:53:15,731 - WARNING - Label 1892: Only 1 tokens with top relevance score (need 50) - [learning2rank.py:724:test_scored_tokens]
60
+ 2025-06-12 20:53:15,759 - INFO - ******************************************************************************** - [learning2rank.py:109:log_section]
61
+ 2025-06-12 20:53:15,759 - INFO - * 🌟 STARTING LEARNING TO RANK MODEL TRAINING * - [learning2rank.py:110:log_section]
62
+ 2025-06-12 20:53:15,759 - INFO - ******************************************************************************** - [learning2rank.py:113:log_section]
63
+ 2025-06-12 20:53:15,759 - INFO - 🔐 Loaded authentication token from environment - [learning2rank.py:3836:main]
64
+ 2025-06-12 20:53:15,759 - INFO - 🏷️ Hub Model ID for this Learning to Rank task: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r - [learning2rank.py:3840:main]
65
+ 2025-06-12 20:53:15,760 - INFO - -------------------------------------------------------------------------------- - [learning2rank.py:109:log_section]
66
+ 2025-06-12 20:53:15,760 - INFO - - 📋 MODEL EXISTENCE CHECK - - [learning2rank.py:110:log_section]
67
+ 2025-06-12 20:53:15,760 - INFO - -------------------------------------------------------------------------------- - [learning2rank.py:113:log_section]
68
+ 2025-06-12 20:53:15,760 - INFO - 🔍 Checking model existence locally and on Hugging Face Hub... - [learning2rank.py:430:check_model_existence]
69
+ 2025-06-12 20:53:15,760 - INFO - ✅ Model exists locally at: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:435:check_model_existence]
70
+ 2025-06-12 20:53:15,806 - INFO - ✅ Model exists on Hugging Face Hub with ID: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r - [learning2rank.py:449:check_model_existence]
71
+ 2025-06-12 20:53:15,806 - INFO - 📁 Model exists either locally or on Hub - [learning2rank.py:475:check_model_existence]
72
+ 2025-06-12 20:53:15,807 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:109:log_section]
73
+ 2025-06-12 20:53:15,807 - INFO - + ✨ STARTING FRESH TRAINING + - [learning2rank.py:110:log_section]
74
+ 2025-06-12 20:53:15,807 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:113:log_section]
75
+ 2025-06-12 20:53:15,807 - INFO - 🔄 Starting fresh training (either forced or model not found)... - [learning2rank.py:3853:main]
76
+ 2025-06-12 20:53:15,834 - WARNING - Note: Environment variable`HF_TOKEN` is set and is the current active token independently from the token you've just configured. - [_login.py:415:_login]
77
+ 2025-06-12 20:53:15,834 - INFO - 🔑 Successfully authenticated with Hugging Face Hub - [learning2rank.py:311:authenticate_hf]
78
+ 2025-06-12 20:53:15,834 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:109:log_section]
79
+ 2025-06-12 20:53:15,834 - INFO - + ✨ LOADING BASE MODEL + - [learning2rank.py:110:log_section]
80
+ 2025-06-12 20:53:15,834 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:113:log_section]
81
+ 2025-06-12 20:53:15,834 - INFO - 📥 Loading pretrained model and tokenizer... - [learning2rank.py:3895:main]
82
+ 2025-06-12 20:53:15,834 - INFO - 🚀 Starting model and tokenizer loading process... - [learning2rank.py:939:load_base_model_and_tokenizer]
83
+ 2025-06-12 20:53:15,835 - INFO - 📊 Quantization config: BitsAndBytesConfig {
84
+ "_load_in_4bit": true,
85
+ "_load_in_8bit": false,
86
+ "bnb_4bit_compute_dtype": "bfloat16",
87
+ "bnb_4bit_quant_storage": "uint8",
88
+ "bnb_4bit_quant_type": "nf4",
89
+ "bnb_4bit_use_double_quant": true,
90
+ "llm_int8_enable_fp32_cpu_offload": false,
91
+ "llm_int8_has_fp16_weight": false,
92
+ "llm_int8_skip_modules": null,
93
+ "llm_int8_threshold": 6.0,
94
+ "load_in_4bit": true,
95
+ "load_in_8bit": false,
96
+ "quant_method": "bitsandbytes"
97
+ }
98
+ - [learning2rank.py:948:load_base_model_and_tokenizer]
99
+ 2025-06-12 20:53:15,835 - INFO - 🔤 Loading tokenizer for model: mistralai/Mistral-7B-Instruct-v0.3... - [learning2rank.py:952:load_base_model_and_tokenizer]
100
+ 2025-06-12 20:53:16,125 - INFO - 📝 Setting pad token to eos token... - [learning2rank.py:956:load_base_model_and_tokenizer]
101
+ 2025-06-12 20:53:16,125 - INFO - 🧠 Loading base model with quantization... - [learning2rank.py:964:load_base_model_and_tokenizer]
102
+ 2025-06-12 20:53:16,629 - INFO - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk). - [modeling.py:991:get_balanced_memory]
103
+ 2025-06-12 20:53:21,863 - INFO - 🔧 Setting up default LoRA configuration... - [learning2rank.py:987:load_base_model_and_tokenizer]
104
+ 2025-06-12 20:53:21,863 - INFO - 🔍 LoRA config: r=16, alpha=32, targets={'k_proj', 'o_proj', 'q_proj', 'v_proj'}, dropout=0.05 - [learning2rank.py:1010:load_base_model_and_tokenizer]
105
+ 2025-06-12 20:53:21,863 - INFO - 🧩 Applying LoRA adapters to model... - [learning2rank.py:1017:load_base_model_and_tokenizer]
106
+ 2025-06-12 20:53:22,044 - INFO - 📊 trainable params: 13,631,488 || all params: 7,261,655,040 || trainable%: 0.1877 - [learning2rank.py:135:log_print_output]
107
+ 2025-06-12 20:53:22,044 - INFO - ✅ Model and tokenizer successfully loaded! - [learning2rank.py:1024:load_base_model_and_tokenizer]
108
+ 2025-06-12 20:53:22,044 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:109:log_section]
109
+ 2025-06-12 20:53:22,044 - INFO - + ✨ DATA PREPROCESSING + - [learning2rank.py:110:log_section]
110
+ 2025-06-12 20:53:22,044 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:113:log_section]
111
+ 2025-06-12 20:53:22,044 - INFO - 🔄 Loading and preprocessing training data... - [learning2rank.py:3903:main]
112
+ 2025-06-12 20:53:23,121 - INFO - 🔍 Verifying uniqueness of token IDs per label in scored_tokens... - [learning2rank.py:1302:preprocess_data]
113
+ 2025-06-12 20:53:23,927 - INFO - ✅ All labels have unique token IDs in scored_tokens. - [learning2rank.py:1314:preprocess_data]
114
+ 2025-06-12 20:53:23,927 - INFO - 🚀 Building a 2D lookup table for efficient token-to-relevance mapping across all labels! 🚀 - [learning2rank.py:1317:preprocess_data]
115
+ 2025-06-12 20:53:23,927 - INFO - 🔢 Total labels = 7942 - [learning2rank.py:1320:preprocess_data]
116
+ 2025-06-12 20:53:23,927 - INFO - 🔍 Precomputing token indices and corresponding relevance_values for each label... - [learning2rank.py:1321:preprocess_data]
117
+ 2025-06-12 20:53:24,155 - INFO - 📊 Lookup table dimensions: 32768 vocabulary size × 7942 labels - [learning2rank.py:1328:preprocess_data]
118
+ 2025-06-12 20:53:24,155 - INFO - ⚡ This approach eliminates token comparison broadcasting and provides O(1) lookup time for relevance scores! - [learning2rank.py:1331:preprocess_data]
119
+ 2025-06-12 20:53:24,155 - INFO - 🧮 Processing relevance calculations vectorized for maximum speed 🔥 - [learning2rank.py:1334:preprocess_data]
120
+ 2025-06-12 20:53:24,311 - INFO - 🔍 Verifying token mappings with 10 samples... - [learning2rank.py:1367:verify_token_mappings]
121
+ 2025-06-12 20:53:24,473 - INFO - ✅ Token mappings verification completed successfully! 🎉 - [learning2rank.py:1458:verify_token_mappings]
122
+ 2025-06-12 20:53:24,477 - INFO - 🔄 Processing dataset with map... - [learning2rank.py:1522:preprocess_data]
123
+ 2025-06-12 20:53:24,801 - INFO - ✅ Dataset built in 0h 0m 0.32s - [learning2rank.py:1545:preprocess_data]
124
+ 2025-06-12 20:53:24,813 - INFO - The size of Training set: 173 🏋️ (Training Data) - [learning2rank.py:1576:preprocess_data]
125
+ 2025-06-12 20:53:24,813 - INFO - The size of Evaluation set: 35 🔍 (Test Data) - [learning2rank.py:1577:preprocess_data]
126
+ 2025-06-12 20:53:24,813 - INFO - 🚀 Created HuggingFace Dataset with 208 samples, 7942 labels - [learning2rank.py:1585:preprocess_data]
127
+ 2025-06-12 20:53:24,814 - INFO - 🏷️ Number of unique ICD-10 codes: 7942 - [learning2rank.py:3916:main]
128
+ 2025-06-12 20:53:24,814 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:109:log_section]
129
+ 2025-06-12 20:53:24,814 - INFO - + ✨ MODEL INITIALIZATION + - [learning2rank.py:110:log_section]
130
+ 2025-06-12 20:53:24,814 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:113:log_section]
131
+ 2025-06-12 20:53:24,814 - INFO - 🧠 Initializing custom L2R model for outputting per-token relevance scores per ICD-10 codes. - [learning2rank.py:3919:main]
132
+ 2025-06-12 20:53:26,376 - INFO - 🔧 Registering LTRModel with transformers AutoModel 🚀 - [learning2rank.py:1725:define_model]
133
+ 2025-06-12 20:53:26,377 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:109:log_section]
134
+ 2025-06-12 20:53:26,377 - INFO - + ✨ TRAINING PREPARATION + - [learning2rank.py:110:log_section]
135
+ 2025-06-12 20:53:26,377 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:113:log_section]
136
+ 2025-06-12 20:53:26,377 - INFO - ⚙️ Preparing training components and optimizers... - [learning2rank.py:3926:main]
137
+ 2025-06-12 20:53:26,409 - INFO - 🖥️ Device: NVIDIA GH200 480GB - [learning2rank.py:1887:log_training_configuration]
138
+ 2025-06-12 20:53:26,409 - INFO - 🔋 CUDA Available: True - [learning2rank.py:1890:log_training_configuration]
139
+ 2025-06-12 20:53:26,409 - INFO - 💾 CUDA Device Count: 1 - [learning2rank.py:1891:log_training_configuration]
140
+ 2025-06-12 20:53:26,410 - INFO -
141
+ 📋 Training Configuration 📋
142
+ +----------+-----------------------------+--------------------------------------------------+
143
+ | 🌟 Emoji | 🏷️ Parameter | 📊 Value |
144
+ +----------+-----------------------------+--------------------------------------------------+
145
+ | 📁 | Output Directory | ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b |
146
+ | 🔁 | Training Epochs | 5 |
147
+ | 🏋️ | Train Batch Size | 1 |
148
+ | 🔍 | Eval Batch Size | 1 |
149
+ | 📊 | Gradient Accumulation Steps | 4 |
150
+ | 🚀 | Learning Rate | 0.0001 |
151
+ | 🌅 | Warmup Steps | 0 |
152
+ | 💾 | Save Strategy | epoch |
153
+ | 💾 | Save Total Limit | 10 |
154
+ | 📊 | Evaluation Strategy | epoch |
155
+ | 🎯 | Best Model Metric | ndcg@25 |
156
+ | 📝 | Logging Strategy | steps (every 10 steps) |
157
+ | 🌐 | Push to Hub | True |
158
+ | 🌐 | Hub Model ID | deb101/mistral-7b-instruct-v0.3-mimic4-adapt-l2r |
159
+ | 🔢 | Steps per Epoch | 43 |
160
+ | 🔢 | Total Training Steps | 215 |
161
+ | 🔢 | Evaluation Steps | 35 |
162
+ | 📊 | Training Dataset Size | 173 samples 🏋️ |
163
+ | 📊 | Evaluation Dataset Size | 35 samples 🔍 |
164
+ +----------+-----------------------------+--------------------------------------------------+ - [learning2rank.py:1879:log_training_args]
165
+ 2025-06-12 20:53:26,410 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:109:log_section]
166
+ 2025-06-12 20:53:26,410 - INFO - + ✨ MODEL TRAINING + - [learning2rank.py:110:log_section]
167
+ 2025-06-12 20:53:26,410 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:113:log_section]
168
+ 2025-06-12 20:53:26,410 - INFO - 🏋️ Starting model training process... - [learning2rank.py:3946:main]
169
+ 2025-06-12 20:53:26,410 - INFO - 🏁 Preparing Custom Trainer 🛠️ - [learning2rank.py:3084:train_model]
170
+ 2025-06-12 20:53:26,454 - INFO - We are registering the tokenizer mistralai/Mistral-7B-Instruct-v0.3 in Custom Trainer - [learning2rank.py:2519:__init__]
171
+ 2025-06-12 20:53:26,454 - INFO - 🏋️ Commencing Model Training 💪 - [learning2rank.py:3125:train_model]
172
+ 2025-06-12 20:53:26,720 - INFO - 🚀 Starting Training... - [learning2rank.py:2269:on_train_begin]
173
+ 2025-06-12 20:53:45,702 - INFO -
174
+ 🚂 Training Metrics (Step 10) 🚂
175
+ +---------------+-------------+
176
+ | Metric | Value |
177
+ +===============+=============+
178
+ | loss | -4.7327e+16 |
179
+ +---------------+-------------+
180
+ | grad_norm | nan |
181
+ +---------------+-------------+
182
+ | learning_rate | 0.0001 |
183
+ +---------------+-------------+
184
+ | epoch | 0.231214 |
185
+ +---------------+-------------+ - [learning2rank.py:2187:on_log]
186
+ 2025-06-12 20:53:59,752 - INFO -
187
+ 🚂 Training Metrics (Step 20) 🚂
188
+ +---------------+--------------+
189
+ | Metric | Value |
190
+ +===============+==============+
191
+ | loss | -3.66103e+17 |
192
+ +---------------+--------------+
193
+ | grad_norm | nan |
194
+ +---------------+--------------+
195
+ | learning_rate | 0.0001 |
196
+ +---------------+--------------+
197
+ | epoch | 0.462428 |
198
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
199
+ 2025-06-12 20:54:13,802 - INFO -
200
+ 🚂 Training Metrics (Step 30) 🚂
201
+ +---------------+--------------+
202
+ | Metric | Value |
203
+ +===============+==============+
204
+ | loss | -6.47362e+16 |
205
+ +---------------+--------------+
206
+ | grad_norm | nan |
207
+ +---------------+--------------+
208
+ | learning_rate | 0.0001 |
209
+ +---------------+--------------+
210
+ | epoch | 0.693642 |
211
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
212
+ 2025-06-12 20:54:27,786 - INFO -
213
+ 🚂 Training Metrics (Step 40) 🚂
214
+ +---------------+--------------+
215
+ | Metric | Value |
216
+ +===============+==============+
217
+ | loss | -8.20245e+17 |
218
+ +---------------+--------------+
219
+ | grad_norm | nan |
220
+ +---------------+--------------+
221
+ | learning_rate | 0.0001 |
222
+ +---------------+--------------+
223
+ | epoch | 0.924855 |
224
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
225
+ 2025-06-12 20:54:32,360 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [learning2rank.py:2635:evaluate]
226
+ 2025-06-12 20:57:48,058 - WARNING - No valid samples found for metric 'precision@25'. - [learning2rank.py:2739:evaluate]
227
+ 2025-06-12 20:57:48,058 - INFO -
228
+ 🔍 Evaluation Metrics 🔍
229
+ +-------------------+--------------+
230
+ | Metric | Value |
231
+ +===================+==============+
232
+ | eval_loss | -1.38641e+17 |
233
+ +-------------------+--------------+
234
+ | eval_ndcg | 0.955604 |
235
+ +-------------------+--------------+
236
+ | eval_ndcg@25 | 0.196457 |
237
+ +-------------------+--------------+
238
+ | eval_precision@25 | 0 |
239
+ +-------------------+--------------+ - [learning2rank.py:2206:on_evaluate]
240
+ 2025-06-12 20:57:51,683 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-44 - [learning2rank.py:2759:_save]
241
+ 2025-06-12 20:57:51,711 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-44 - [learning2rank.py:2764:_save]
242
+ 2025-06-12 20:57:51,712 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-44:
243
+ +---------+--------------------+------------+
244
+ | Index | Saved File | Size |
245
+ +=========+====================+============+
246
+ | 1 | training_args.bin | 0.01 MB |
247
+ +---------+--------------------+------------+
248
+ | 2 | optimizer.pt | 0.00 MB |
249
+ +---------+--------------------+------------+
250
+ | 3 | model.safetensors | 4122.74 MB |
251
+ +---------+--------------------+------------+
252
+ | 4 | scaler.pt | 0.00 MB |
253
+ +---------+--------------------+------------+
254
+ | 5 | config.json | 0.38 MB |
255
+ +---------+--------------------+------------+
256
+ | 6 | scheduler.pt | 0.00 MB |
257
+ +---------+--------------------+------------+
258
+ | 7 | trainer_state.json | 0.00 MB |
259
+ +---------+--------------------+------------+
260
+ | 8 | rng_state.pth | 0.01 MB |
261
+ +---------+--------------------+------------+ - [learning2rank.py:2785:_save]
262
+ 2025-06-12 20:58:04,954 - INFO -
263
+ 🚂 Training Metrics (Step 50) 🚂
264
+ +---------------+--------------+
265
+ | Metric | Value |
266
+ +===============+==============+
267
+ | loss | -2.51752e+18 |
268
+ +---------------+--------------+
269
+ | grad_norm | nan |
270
+ +---------------+--------------+
271
+ | learning_rate | 0.0001 |
272
+ +---------------+--------------+
273
+ | epoch | 1.13873 |
274
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
275
+ 2025-06-12 20:58:19,125 - INFO -
276
+ 🚂 Training Metrics (Step 60) 🚂
277
+ +---------------+--------------+
278
+ | Metric | Value |
279
+ +===============+==============+
280
+ | loss | -1.03164e+17 |
281
+ +---------------+--------------+
282
+ | grad_norm | nan |
283
+ +---------------+--------------+
284
+ | learning_rate | 9.9e-05 |
285
+ +---------------+--------------+
286
+ | epoch | 1.36994 |
287
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
288
+ 2025-06-12 20:58:33,206 - INFO -
289
+ 🚂 Training Metrics (Step 70) 🚂
290
+ +---------------+--------------+
291
+ | Metric | Value |
292
+ +===============+==============+
293
+ | loss | -6.56896e+18 |
294
+ +---------------+--------------+
295
+ | grad_norm | 1.68516e+18 |
296
+ +---------------+--------------+
297
+ | learning_rate | 9.5e-05 |
298
+ +---------------+--------------+
299
+ | epoch | 1.60116 |
300
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
301
+ 2025-06-12 20:58:47,239 - INFO -
302
+ 🚂 Training Metrics (Step 80) 🚂
303
+ +---------------+--------------+
304
+ | Metric | Value |
305
+ +===============+==============+
306
+ | loss | -9.21762e+16 |
307
+ +---------------+--------------+
308
+ | grad_norm | 1.31801e+18 |
309
+ +---------------+--------------+
310
+ | learning_rate | 9.1e-05 |
311
+ +---------------+--------------+
312
+ | epoch | 1.83237 |
313
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
314
+ 2025-06-12 20:58:57,383 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [learning2rank.py:2635:evaluate]
315
+ 2025-06-12 21:02:12,488 - INFO -
316
+ 🔍 Evaluation Metrics 🔍
317
+ +-------------------+--------------+
318
+ | Metric | Value |
319
+ +===================+==============+
320
+ | eval_loss | -3.25692e+17 |
321
+ +-------------------+--------------+
322
+ | eval_ndcg | 0.956481 |
323
+ +-------------------+--------------+
324
+ | eval_ndcg@25 | 0.64998 |
325
+ +-------------------+--------------+
326
+ | eval_precision@25 | 0.545143 |
327
+ +-------------------+--------------+ - [learning2rank.py:2206:on_evaluate]
328
+ 2025-06-12 21:02:14,368 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-88 - [learning2rank.py:2759:_save]
329
+ 2025-06-12 21:02:14,395 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-88 - [learning2rank.py:2764:_save]
330
+ 2025-06-12 21:02:14,397 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-88:
331
+ +---------+--------------------+------------+
332
+ | Index | Saved File | Size |
333
+ +=========+====================+============+
334
+ | 1 | training_args.bin | 0.01 MB |
335
+ +---------+--------------------+------------+
336
+ | 2 | optimizer.pt | 352.39 MB |
337
+ +---------+--------------------+------------+
338
+ | 3 | model.safetensors | 4122.74 MB |
339
+ +---------+--------------------+------------+
340
+ | 4 | scaler.pt | 0.00 MB |
341
+ +---------+--------------------+------------+
342
+ | 5 | config.json | 0.38 MB |
343
+ +---------+--------------------+------------+
344
+ | 6 | scheduler.pt | 0.00 MB |
345
+ +---------+--------------------+------------+
346
+ | 7 | trainer_state.json | 0.00 MB |
347
+ +---------+--------------------+------------+
348
+ | 8 | rng_state.pth | 0.01 MB |
349
+ +---------+--------------------+------------+ - [learning2rank.py:2785:_save]
350
+ 2025-06-12 21:02:22,258 - INFO -
351
+ 🚂 Training Metrics (Step 90) 🚂
352
+ +---------------+--------------+
353
+ | Metric | Value |
354
+ +===============+==============+
355
+ | loss | -9.98789e+17 |
356
+ +---------------+--------------+
357
+ | grad_norm | 3.42546e+17 |
358
+ +---------------+--------------+
359
+ | learning_rate | 8.7e-05 |
360
+ +---------------+--------------+
361
+ | epoch | 2.04624 |
362
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
363
+ 2025-06-12 21:02:36,292 - INFO -
364
+ 🚂 Training Metrics (Step 100) 🚂
365
+ +---------------+--------------+
366
+ | Metric | Value |
367
+ +===============+==============+
368
+ | loss | -2.14719e+17 |
369
+ +---------------+--------------+
370
+ | grad_norm | 6.88191e+17 |
371
+ +---------------+--------------+
372
+ | learning_rate | 8.2e-05 |
373
+ +---------------+--------------+
374
+ | epoch | 2.27746 |
375
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
376
+ 2025-06-12 21:02:50,369 - INFO -
377
+ 🚂 Training Metrics (Step 110) 🚂
378
+ +---------------+--------------+
379
+ | Metric | Value |
380
+ +===============+==============+
381
+ | loss | -4.48427e+18 |
382
+ +---------------+--------------+
383
+ | grad_norm | nan |
384
+ +---------------+--------------+
385
+ | learning_rate | 7.8e-05 |
386
+ +---------------+--------------+
387
+ | epoch | 2.50867 |
388
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
389
+ 2025-06-12 21:03:04,458 - INFO -
390
+ 🚂 Training Metrics (Step 120) 🚂
391
+ +---------------+--------------+
392
+ | Metric | Value |
393
+ +===============+==============+
394
+ | loss | -4.49499e+17 |
395
+ +---------------+--------------+
396
+ | grad_norm | 1.00208e+18 |
397
+ +---------------+--------------+
398
+ | learning_rate | 7.3e-05 |
399
+ +---------------+--------------+
400
+ | epoch | 2.73988 |
401
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
402
+ 2025-06-12 21:03:18,518 - INFO -
403
+ 🚂 Training Metrics (Step 130) 🚂
404
+ +---------------+--------------+
405
+ | Metric | Value |
406
+ +===============+==============+
407
+ | loss | -1.38663e+18 |
408
+ +---------------+--------------+
409
+ | grad_norm | 2.64355e+17 |
410
+ +---------------+--------------+
411
+ | learning_rate | 6.9e-05 |
412
+ +---------------+--------------+
413
+ | epoch | 2.9711 |
414
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
415
+ 2025-06-12 21:03:20,282 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [learning2rank.py:2635:evaluate]
416
+ 2025-06-12 21:06:35,391 - INFO -
417
+ 🔍 Evaluation Metrics 🔍
418
+ +-------------------+-------------+
419
+ | Metric | Value |
420
+ +===================+=============+
421
+ | eval_loss | -4.3741e+17 |
422
+ +-------------------+-------------+
423
+ | eval_ndcg | 0.956946 |
424
+ +-------------------+-------------+
425
+ | eval_ndcg@25 | 0.787928 |
426
+ +-------------------+-------------+
427
+ | eval_precision@25 | 0.908571 |
428
+ +-------------------+-------------+ - [learning2rank.py:2206:on_evaluate]
429
+ 2025-06-12 21:06:37,242 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-132 - [learning2rank.py:2759:_save]
430
+ 2025-06-12 21:06:37,269 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-132 - [learning2rank.py:2764:_save]
431
+ 2025-06-12 21:06:37,270 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-132:
432
+ +---------+--------------------+------------+
433
+ | Index | Saved File | Size |
434
+ +=========+====================+============+
435
+ | 1 | training_args.bin | 0.01 MB |
436
+ +---------+--------------------+------------+
437
+ | 2 | optimizer.pt | 352.39 MB |
438
+ +---------+--------------------+------------+
439
+ | 3 | model.safetensors | 4122.74 MB |
440
+ +---------+--------------------+------------+
441
+ | 4 | scaler.pt | 0.00 MB |
442
+ +---------+--------------------+------------+
443
+ | 5 | config.json | 0.38 MB |
444
+ +---------+--------------------+------------+
445
+ | 6 | scheduler.pt | 0.00 MB |
446
+ +---------+--------------------+------------+
447
+ | 7 | trainer_state.json | 0.00 MB |
448
+ +---------+--------------------+------------+
449
+ | 8 | rng_state.pth | 0.01 MB |
450
+ +---------+--------------------+------------+ - [learning2rank.py:2785:_save]
451
+ 2025-06-12 21:06:53,565 - INFO -
452
+ 🚂 Training Metrics (Step 140) 🚂
453
+ +---------------+--------------+
454
+ | Metric | Value |
455
+ +===============+==============+
456
+ | loss | -1.09894e+19 |
457
+ +---------------+--------------+
458
+ | grad_norm | 1.94154e+17 |
459
+ +---------------+--------------+
460
+ | learning_rate | 6.5e-05 |
461
+ +---------------+--------------+
462
+ | epoch | 3.18497 |
463
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
464
+ 2025-06-12 21:07:07,672 - INFO -
465
+ 🚂 Training Metrics (Step 150) 🚂
466
+ +---------------+--------------+
467
+ | Metric | Value |
468
+ +===============+==============+
469
+ | loss | -6.84143e+18 |
470
+ +---------------+--------------+
471
+ | grad_norm | 2.59061e+17 |
472
+ +---------------+--------------+
473
+ | learning_rate | 6.1e-05 |
474
+ +---------------+--------------+
475
+ | epoch | 3.41619 |
476
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
477
+ 2025-06-12 21:07:21,752 - INFO -
478
+ 🚂 Training Metrics (Step 160) 🚂
479
+ +---------------+--------------+
480
+ | Metric | Value |
481
+ +===============+==============+
482
+ | loss | -4.95167e+17 |
483
+ +---------------+--------------+
484
+ | grad_norm | 2.42175e+17 |
485
+ +---------------+--------------+
486
+ | learning_rate | 5.6e-05 |
487
+ +---------------+--------------+
488
+ | epoch | 3.6474 |
489
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
490
+ 2025-06-12 21:07:35,824 - INFO -
491
+ 🚂 Training Metrics (Step 170) 🚂
492
+ +---------------+--------------+
493
+ | Metric | Value |
494
+ +===============+==============+
495
+ | loss | -3.52079e+17 |
496
+ +---------------+--------------+
497
+ | grad_norm | 2.65202e+17 |
498
+ +---------------+--------------+
499
+ | learning_rate | 5.2e-05 |
500
+ +---------------+--------------+
501
+ | epoch | 3.87861 |
502
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
503
+ 2025-06-12 21:07:43,199 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [learning2rank.py:2635:evaluate]
504
+ 2025-06-12 21:10:58,474 - INFO -
505
+ 🔍 Evaluation Metrics 🔍
506
+ +-------------------+--------------+
507
+ | Metric | Value |
508
+ +===================+==============+
509
+ | eval_loss | -4.41614e+17 |
510
+ +-------------------+--------------+
511
+ | eval_ndcg | 0.957167 |
512
+ +-------------------+--------------+
513
+ | eval_ndcg@25 | 0.842648 |
514
+ +-------------------+--------------+
515
+ | eval_precision@25 | 0.914286 |
516
+ +-------------------+--------------+ - [learning2rank.py:2206:on_evaluate]
517
+ 2025-06-12 21:11:01,532 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-176 - [learning2rank.py:2759:_save]
518
+ 2025-06-12 21:11:01,560 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-176 - [learning2rank.py:2764:_save]
519
+ 2025-06-12 21:11:01,561 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-176:
520
+ +---------+--------------------+------------+
521
+ | Index | Saved File | Size |
522
+ +=========+====================+============+
523
+ | 1 | training_args.bin | 0.01 MB |
524
+ +---------+--------------------+------------+
525
+ | 2 | optimizer.pt | 352.39 MB |
526
+ +---------+--------------------+------------+
527
+ | 3 | model.safetensors | 4122.74 MB |
528
+ +---------+--------------------+------------+
529
+ | 4 | scaler.pt | 0.00 MB |
530
+ +---------+--------------------+------------+
531
+ | 5 | config.json | 0.38 MB |
532
+ +---------+--------------------+------------+
533
+ | 6 | scheduler.pt | 0.00 MB |
534
+ +---------+--------------------+------------+
535
+ | 7 | trainer_state.json | 0.00 MB |
536
+ +---------+--------------------+------------+
537
+ | 8 | rng_state.pth | 0.01 MB |
538
+ +---------+--------------------+------------+ - [learning2rank.py:2785:_save]
539
+ 2025-06-12 21:11:12,194 - INFO -
540
+ 🚂 Training Metrics (Step 180) 🚂
541
+ +---------------+--------------+
542
+ | Metric | Value |
543
+ +===============+==============+
544
+ | loss | -1.75799e+18 |
545
+ +---------------+--------------+
546
+ | grad_norm | 4.09451e+16 |
547
+ +---------------+--------------+
548
+ | learning_rate | 4.7e-05 |
549
+ +---------------+--------------+
550
+ | epoch | 4.09249 |
551
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
552
+ 2025-06-12 21:11:26,251 - INFO -
553
+ 🚂 Training Metrics (Step 190) 🚂
554
+ +---------------+--------------+
555
+ | Metric | Value |
556
+ +===============+==============+
557
+ | loss | -1.50435e+18 |
558
+ +---------------+--------------+
559
+ | grad_norm | 9.34133e+17 |
560
+ +---------------+--------------+
561
+ | learning_rate | 4.2e-05 |
562
+ +---------------+--------------+
563
+ | epoch | 4.3237 |
564
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
565
+ 2025-06-12 21:11:40,349 - INFO -
566
+ 🚂 Training Metrics (Step 200) 🚂
567
+ +---------------+--------------+
568
+ | Metric | Value |
569
+ +===============+==============+
570
+ | loss | -7.57569e+17 |
571
+ +---------------+--------------+
572
+ | grad_norm | 2.45933e+17 |
573
+ +---------------+--------------+
574
+ | learning_rate | 3.8e-05 |
575
+ +---------------+--------------+
576
+ | epoch | 4.55491 |
577
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
578
+ 2025-06-12 21:11:54,405 - INFO -
579
+ 🚂 Training Metrics (Step 210) 🚂
580
+ +---------------+--------------+
581
+ | Metric | Value |
582
+ +===============+==============+
583
+ | loss | -4.55401e+18 |
584
+ +---------------+--------------+
585
+ | grad_norm | 1.51256e+17 |
586
+ +---------------+--------------+
587
+ | learning_rate | 3.3e-05 |
588
+ +---------------+--------------+
589
+ | epoch | 4.78613 |
590
+ +---------------+--------------+ - [learning2rank.py:2187:on_log]
591
+ 2025-06-12 21:12:03,394 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-215 - [learning2rank.py:2759:_save]
592
+ 2025-06-12 21:12:03,424 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-215 - [learning2rank.py:2764:_save]
593
+ 2025-06-12 21:12:03,425 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-215:
594
+ +---------+--------------------+------------+
595
+ | Index | Saved File | Size |
596
+ +=========+====================+============+
597
+ | 1 | training_args.bin | 0.01 MB |
598
+ +---------+--------------------+------------+
599
+ | 2 | optimizer.pt | 352.39 MB |
600
+ +---------+--------------------+------------+
601
+ | 3 | model.safetensors | 4122.74 MB |
602
+ +---------+--------------------+------------+
603
+ | 4 | scaler.pt | 0.00 MB |
604
+ +---------+--------------------+------------+
605
+ | 5 | config.json | 0.38 MB |
606
+ +---------+--------------------+------------+
607
+ | 6 | scheduler.pt | 0.00 MB |
608
+ +---------+--------------------+------------+
609
+ | 7 | trainer_state.json | 0.01 MB |
610
+ +---------+--------------------+------------+
611
+ | 8 | rng_state.pth | 0.01 MB |
612
+ +---------+--------------------+------------+ - [learning2rank.py:2785:_save]
613
+ 2025-06-12 21:12:06,059 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [learning2rank.py:2635:evaluate]
614
+ 2025-06-12 21:15:21,367 - INFO -
615
+ 🔍 Evaluation Metrics 🔍
616
+ +-------------------+--------------+
617
+ | Metric | Value |
618
+ +===================+==============+
619
+ | eval_loss | -4.43631e+17 |
620
+ +-------------------+--------------+
621
+ | eval_ndcg | 0.95719 |
622
+ +-------------------+--------------+
623
+ | eval_ndcg@25 | 0.832013 |
624
+ +-------------------+--------------+
625
+ | eval_precision@25 | 0.913143 |
626
+ +-------------------+--------------+ - [learning2rank.py:2206:on_evaluate]
627
+ 2025-06-12 21:15:24,175 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-215 - [learning2rank.py:2759:_save]
628
+ 2025-06-12 21:15:24,203 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-215 - [learning2rank.py:2764:_save]
629
+ 2025-06-12 21:15:24,205 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-215:
630
+ +---------+--------------------+------------+
631
+ | Index | Saved File | Size |
632
+ +=========+====================+============+
633
+ | 1 | training_args.bin | 0.01 MB |
634
+ +---------+--------------------+------------+
635
+ | 2 | optimizer.pt | 352.39 MB |
636
+ +---------+--------------------+------------+
637
+ | 3 | model.safetensors | 4122.74 MB |
638
+ +---------+--------------------+------------+
639
+ | 4 | scaler.pt | 0.00 MB |
640
+ +---------+--------------------+------------+
641
+ | 5 | config.json | 0.38 MB |
642
+ +---------+--------------------+------------+
643
+ | 6 | scheduler.pt | 0.00 MB |
644
+ +---------+--------------------+------------+
645
+ | 7 | trainer_state.json | 0.01 MB |
646
+ +---------+--------------------+------------+
647
+ | 8 | rng_state.pth | 0.01 MB |
648
+ +---------+--------------------+------------+ - [learning2rank.py:2785:_save]
649
+ 2025-06-12 21:15:24,455 - INFO - 📂 Loading best model from ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-176 - [learning2rank.py:2841:_load_best_model]
650
+ 2025-06-12 21:15:24,455 - INFO - 🖥️ Model is on device: cuda:0 - [learning2rank.py:2851:_load_best_model]
651
+ 2025-06-12 21:15:24,518 - INFO - 🔑 Key order comparison:
652
+ +---------+--------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+
653
+ | Index | Saved state_dict Keys | Model state_dict Keys |
654
+ +=========+==========================================================================+===========================================================================================+
655
+ | 1 | ground_model.base_model.model.lm_head.weight | label_embeddings.weight |
656
+ +---------+--------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+
657
+ | 2 | ground_model.base_model.model.model.embed_tokens.weight | ground_model.base_model.model.model.embed_tokens.weight |
658
+ +---------+--------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+
659
+ | 3 | ground_model.base_model.model.model.layers.0.input_layernorm.weight | ground_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight |
660
+ +---------+--------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+
661
+ | 4 | ground_model.base_model.model.model.layers.0.mlp.down_proj.weight | ground_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight.absmax |
662
+ +---------+--------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+
663
+ | 5 | ground_model.base_model.model.model.layers.0.mlp.down_proj.weight.absmax | ground_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight.quant_map |
664
+ +---------+--------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+ - [learning2rank.py:2875:_load_best_model]
665
+ 2025-06-12 21:15:25,345 - INFO - ✅ Loaded best model weights from ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/checkpoint-176/model.safetensors - [learning2rank.py:2892:_load_best_model]
666
+ 2025-06-12 21:15:25,385 - INFO - ✔️ Weight for label_embeddings.weight matches between saved and loaded state_dict - [learning2rank.py:2904:_load_best_model]
667
+ 2025-06-12 21:15:25,441 - INFO - ✔️ Weight for ground_model.base_model.model.model.embed_tokens.weight matches between saved and loaded state_dict - [learning2rank.py:2904:_load_best_model]
668
+ 2025-06-12 21:15:25,459 - INFO -
669
+ 🚂 Training Metrics (Step 215) 🚂
670
+ +--------------------------+--------------+
671
+ | Metric | Value |
672
+ +==========================+==============+
673
+ | train_runtime | 1318.74 |
674
+ +--------------------------+--------------+
675
+ | train_samples_per_second | 0.656 |
676
+ +--------------------------+--------------+
677
+ | train_steps_per_second | 0.163 |
678
+ +--------------------------+--------------+
679
+ | total_flos | 9.47661e+15 |
680
+ +--------------------------+--------------+
681
+ | train_loss | -2.42904e+18 |
682
+ +--------------------------+--------------+
683
+ | epoch | 4.90173 |
684
+ +--------------------------+--------------+ - [learning2rank.py:2187:on_log]
685
+ 2025-06-12 21:15:25,459 - INFO - ✨ Training Completed! ✨ - [learning2rank.py:2336:on_train_end]
686
+ 2025-06-12 21:15:25,535 - INFO - 📊 Training loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/train_loss_plot.png' - [learning2rank.py:2439:on_train_end]
687
+ 2025-06-12 21:15:25,598 - INFO - 📊 Evaluation loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/eval_loss_plot.png' - [learning2rank.py:2453:on_train_end]
688
+ 2025-06-12 21:15:25,660 - INFO - 📊 Evaluation metric plot saved as '../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/eval_ndcg@25_plot.png' - [learning2rank.py:2474:on_train_end]
689
+ 2025-06-12 21:15:25,660 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:109:log_section]
690
+ 2025-06-12 21:15:25,661 - INFO - + ✨ MODEL SAVING + - [learning2rank.py:110:log_section]
691
+ 2025-06-12 21:15:25,661 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [learning2rank.py:113:log_section]
692
+ 2025-06-12 21:15:25,661 - INFO - 💾 Saving trained model and pushing to Hugging Face Hub... - [learning2rank.py:3961:main]
693
+ 2025-06-12 21:15:25,661 - INFO - 📁 Creating/using output directory: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:3159:save_and_push]
694
+ 2025-06-12 21:15:29,152 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:2759:_save]
695
+ 2025-06-12 21:15:29,180 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:2764:_save]
696
+ 2025-06-12 21:15:29,183 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b:
697
+ +---------+------------------------------------------+------------+
698
+ | Index | Saved File | Size |
699
+ +=========+==========================================+============+
700
+ | 1 | eval_loss_plot.png | 0.03 MB |
701
+ +---------+------------------------------------------+------------+
702
+ | 2 | training_args.bin | 0.01 MB |
703
+ +---------+------------------------------------------+------------+
704
+ | 3 | tokenizer.model | 0.56 MB |
705
+ +---------+------------------------------------------+------------+
706
+ | 4 | tokenizer.json | 3.50 MB |
707
+ +---------+------------------------------------------+------------+
708
+ | 5 | model.safetensors | 4122.74 MB |
709
+ +---------+------------------------------------------+------------+
710
+ | 6 | eval_ndcg@25_plot.png | 0.03 MB |
711
+ +---------+------------------------------------------+------------+
712
+ | 7 | config.json | 0.38 MB |
713
+ +---------+------------------------------------------+------------+
714
+ | 8 | special_tokens_map.json | 0.00 MB |
715
+ +---------+------------------------------------------+------------+
716
+ | 9 | lookup_table.pt | 992.75 MB |
717
+ +---------+------------------------------------------+------------+
718
+ | 10 | tokenizer_config.json | 0.13 MB |
719
+ +---------+------------------------------------------+------------+
720
+ | 11 | train_loss_plot.png | 0.04 MB |
721
+ +---------+------------------------------------------+------------+
722
+ | 12 | training_l2r_log_2025-06-12_16-06-10.log | 0.05 MB |
723
+ +---------+------------------------------------------+------------+
724
+ | 13 | README.md | 0.00 MB |
725
+ +---------+------------------------------------------+------------+
726
+ | 14 | checkpoint-44/training_args.bin | 0.01 MB |
727
+ +---------+------------------------------------------+------------+
728
+ | 15 | checkpoint-44/optimizer.pt | 0.00 MB |
729
+ +---------+------------------------------------------+------------+
730
+ | 16 | checkpoint-44/model.safetensors | 4122.74 MB |
731
+ +---------+------------------------------------------+------------+
732
+ | 17 | checkpoint-44/scaler.pt | 0.00 MB |
733
+ +---------+------------------------------------------+------------+
734
+ | 18 | checkpoint-44/config.json | 0.38 MB |
735
+ +---------+------------------------------------------+------------+
736
+ | 19 | checkpoint-44/scheduler.pt | 0.00 MB |
737
+ +---------+------------------------------------------+------------+
738
+ | 20 | checkpoint-44/trainer_state.json | 0.00 MB |
739
+ +---------+------------------------------------------+------------+
740
+ | 21 | checkpoint-44/rng_state.pth | 0.01 MB |
741
+ +---------+------------------------------------------+------------+
742
+ | 22 | ground_model/adapter_config.json | 0.00 MB |
743
+ +---------+------------------------------------------+------------+
744
+ | 23 | ground_model/adapter_model.safetensors | 52.03 MB |
745
+ +---------+------------------------------------------+------------+
746
+ | 24 | ground_model/README.md | 0.00 MB |
747
+ +---------+------------------------------------------+------------+
748
+ | 25 | checkpoint-176/training_args.bin | 0.01 MB |
749
+ +---------+------------------------------------------+------------+
750
+ | 26 | checkpoint-176/optimizer.pt | 352.39 MB |
751
+ +---------+------------------------------------------+------------+
752
+ | 27 | checkpoint-176/model.safetensors | 4122.74 MB |
753
+ +---------+------------------------------------------+------------+
754
+ | 28 | checkpoint-176/scaler.pt | 0.00 MB |
755
+ +---------+------------------------------------------+------------+
756
+ | 29 | checkpoint-176/config.json | 0.38 MB |
757
+ +---------+------------------------------------------+------------+
758
+ | 30 | checkpoint-176/scheduler.pt | 0.00 MB |
759
+ +---------+------------------------------------------+------------+
760
+ | 31 | checkpoint-176/trainer_state.json | 0.00 MB |
761
+ +---------+------------------------------------------+------------+
762
+ | 32 | checkpoint-176/rng_state.pth | 0.01 MB |
763
+ +---------+------------------------------------------+------------+
764
+ | 33 | checkpoint-215/training_args.bin | 0.01 MB |
765
+ +---------+------------------------------------------+------------+
766
+ | 34 | checkpoint-215/optimizer.pt | 352.39 MB |
767
+ +---------+------------------------------------------+------------+
768
+ | 35 | checkpoint-215/model.safetensors | 4122.74 MB |
769
+ +---------+------------------------------------------+------------+
770
+ | 36 | checkpoint-215/scaler.pt | 0.00 MB |
771
+ +---------+------------------------------------------+------------+
772
+ | 37 | checkpoint-215/config.json | 0.38 MB |
773
+ +---------+------------------------------------------+------------+
774
+ | 38 | checkpoint-215/scheduler.pt | 0.00 MB |
775
+ +---------+------------------------------------------+------------+
776
+ | 39 | checkpoint-215/trainer_state.json | 0.01 MB |
777
+ +---------+------------------------------------------+------------+
778
+ | 40 | checkpoint-215/rng_state.pth | 0.01 MB |
779
+ +---------+------------------------------------------+------------+
780
+ | 41 | checkpoint-88/training_args.bin | 0.01 MB |
781
+ +---------+------------------------------------------+------------+
782
+ | 42 | checkpoint-88/optimizer.pt | 352.39 MB |
783
+ +---------+------------------------------------------+------------+
784
+ | 43 | checkpoint-88/model.safetensors | 4122.74 MB |
785
+ +---------+------------------------------------------+------------+
786
+ | 44 | checkpoint-88/scaler.pt | 0.00 MB |
787
+ +---------+------------------------------------------+------------+
788
+ | 45 | checkpoint-88/config.json | 0.38 MB |
789
+ +---------+------------------------------------------+------------+
790
+ | 46 | checkpoint-88/scheduler.pt | 0.00 MB |
791
+ +---------+------------------------------------------+------------+
792
+ | 47 | checkpoint-88/trainer_state.json | 0.00 MB |
793
+ +---------+------------------------------------------+------------+
794
+ | 48 | checkpoint-88/rng_state.pth | 0.01 MB |
795
+ +---------+------------------------------------------+------------+
796
+ | 49 | checkpoint-132/training_args.bin | 0.01 MB |
797
+ +---------+------------------------------------------+------------+
798
+ | 50 | checkpoint-132/optimizer.pt | 352.39 MB |
799
+ +---------+------------------------------------------+------------+
800
+ | 51 | checkpoint-132/model.safetensors | 4122.74 MB |
801
+ +---------+------------------------------------------+------------+
802
+ | 52 | checkpoint-132/scaler.pt | 0.00 MB |
803
+ +---------+------------------------------------------+------------+
804
+ | 53 | checkpoint-132/config.json | 0.38 MB |
805
+ +---------+------------------------------------------+------------+
806
+ | 54 | checkpoint-132/scheduler.pt | 0.00 MB |
807
+ +---------+------------------------------------------+------------+
808
+ | 55 | checkpoint-132/trainer_state.json | 0.00 MB |
809
+ +---------+------------------------------------------+------------+
810
+ | 56 | checkpoint-132/rng_state.pth | 0.01 MB |
811
+ +---------+------------------------------------------+------------+ - [learning2rank.py:2785:_save]
812
+ 2025-06-12 21:15:32,942 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:2759:_save]
+ 2025-06-12 21:15:32,970 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:2764:_save]
+ 2025-06-12 21:15:32,973 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b:
+ +---------+------------------------------------------+------------+
+ | Index | Saved File | Size |
+ +=========+==========================================+============+
+ | 1 | eval_loss_plot.png | 0.03 MB |
+ +---------+------------------------------------------+------------+
+ | 2 | training_args.bin | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 3 | tokenizer.model | 0.56 MB |
+ +---------+------------------------------------------+------------+
+ | 4 | tokenizer.json | 3.50 MB |
+ +---------+------------------------------------------+------------+
+ | 5 | model.safetensors | 4122.74 MB |
+ +---------+------------------------------------------+------------+
+ | 6 | eval_ndcg@25_plot.png | 0.03 MB |
+ +---------+------------------------------------------+------------+
+ | 7 | config.json | 0.38 MB |
+ +---------+------------------------------------------+------------+
+ | 8 | special_tokens_map.json | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 9 | lookup_table.pt | 992.75 MB |
+ +---------+------------------------------------------+------------+
+ | 10 | tokenizer_config.json | 0.13 MB |
+ +---------+------------------------------------------+------------+
+ | 11 | train_loss_plot.png | 0.04 MB |
+ +---------+------------------------------------------+------------+
+ | 12 | training_l2r_log_2025-06-12_16-06-10.log | 0.05 MB |
+ +---------+------------------------------------------+------------+
+ | 13 | README.md | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 14 | checkpoint-44/training_args.bin | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 15 | checkpoint-44/optimizer.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 16 | checkpoint-44/model.safetensors | 4122.74 MB |
+ +---------+------------------------------------------+------------+
+ | 17 | checkpoint-44/scaler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 18 | checkpoint-44/config.json | 0.38 MB |
+ +---------+------------------------------------------+------------+
+ | 19 | checkpoint-44/scheduler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 20 | checkpoint-44/trainer_state.json | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 21 | checkpoint-44/rng_state.pth | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 22 | ground_model/adapter_config.json | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 23 | ground_model/adapter_model.safetensors | 52.03 MB |
+ +---------+------------------------------------------+------------+
+ | 24 | ground_model/README.md | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 25 | checkpoint-176/training_args.bin | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 26 | checkpoint-176/optimizer.pt | 352.39 MB |
+ +---------+------------------------------------------+------------+
+ | 27 | checkpoint-176/model.safetensors | 4122.74 MB |
+ +---------+------------------------------------------+------------+
+ | 28 | checkpoint-176/scaler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 29 | checkpoint-176/config.json | 0.38 MB |
+ +---------+------------------------------------------+------------+
+ | 30 | checkpoint-176/scheduler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 31 | checkpoint-176/trainer_state.json | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 32 | checkpoint-176/rng_state.pth | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 33 | checkpoint-215/training_args.bin | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 34 | checkpoint-215/optimizer.pt | 352.39 MB |
+ +---------+------------------------------------------+------------+
+ | 35 | checkpoint-215/model.safetensors | 4122.74 MB |
+ +---------+------------------------------------------+------------+
+ | 36 | checkpoint-215/scaler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 37 | checkpoint-215/config.json | 0.38 MB |
+ +---------+------------------------------------------+------------+
+ | 38 | checkpoint-215/scheduler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 39 | checkpoint-215/trainer_state.json | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 40 | checkpoint-215/rng_state.pth | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 41 | checkpoint-88/training_args.bin | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 42 | checkpoint-88/optimizer.pt | 352.39 MB |
+ +---------+------------------------------------------+------------+
+ | 43 | checkpoint-88/model.safetensors | 4122.74 MB |
+ +---------+------------------------------------------+------------+
+ | 44 | checkpoint-88/scaler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 45 | checkpoint-88/config.json | 0.38 MB |
+ +---------+------------------------------------------+------------+
+ | 46 | checkpoint-88/scheduler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 47 | checkpoint-88/trainer_state.json | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 48 | checkpoint-88/rng_state.pth | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 49 | checkpoint-132/training_args.bin | 0.01 MB |
+ +---------+------------------------------------------+------------+
+ | 50 | checkpoint-132/optimizer.pt | 352.39 MB |
+ +---------+------------------------------------------+------------+
+ | 51 | checkpoint-132/model.safetensors | 4122.74 MB |
+ +---------+------------------------------------------+------------+
+ | 52 | checkpoint-132/scaler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 53 | checkpoint-132/config.json | 0.38 MB |
+ +---------+------------------------------------------+------------+
+ | 54 | checkpoint-132/scheduler.pt | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 55 | checkpoint-132/trainer_state.json | 0.00 MB |
+ +---------+------------------------------------------+------------+
+ | 56 | checkpoint-132/rng_state.pth | 0.01 MB |
+ +---------+------------------------------------------+------------+ - [learning2rank.py:2785:_save]
+ 2025-06-12 21:16:57,552 - INFO - 💾 Model saved to: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:3163:save_and_push]
+ 2025-06-12 21:17:02,465 - INFO - ✅ Model, ground_model, and lookup_table saved to ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [core.py:404:save_pretrained]
+ 2025-06-12 21:17:02,465 - INFO - 💾 Model and config explicitly saved to: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:3169:save_and_push]
+ 2025-06-12 21:17:02,499 - INFO - 🖌️ Tokenizer saved to: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b - [learning2rank.py:3173:save_and_push]
+ 2025-06-12 21:17:03,375 - INFO - 📊 Lookup table saved to: ../tmp/MIMIC4_DEMO/mimic4_l2rtrain_mistral7b/lookup_table.pt - [learning2rank.py:3178:save_and_push]