phi3_15epochs / output.log
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
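A minimal illustrative sketch (not part of the original run) of the environment-variable option named in the warning above, assuming it is set in Python before any worker processes fork:

    import os
    # Disable tokenizer parallelism before anything forks
    # (e.g. before DataLoader workers start) to avoid the deadlock warning.
    os.environ["TOKENIZERS_PARALLELISM"] = "false"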
Token has not been saved to git credential helper. Pass `add_to_git_credential=True` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /root/.cache/huggingface/token
Login successful
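A hedged sketch of the login call suggested by the messages above, using the huggingface_hub API; the token value is a placeholder:

    from huggingface_hub import login
    # add_to_git_credential=True also stores the token in the git credential helper,
    # which avoids the "Token has not been saved" notice above.
    login(token="hf_XXXX", add_to_git_credential=True)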
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
adding: kaggle/working/ (stored 0%)
adding: kaggle/working/trainer/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-1652/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-1652/README.md (deflated 66%)
adding: kaggle/working/trainer/checkpoint-1652/rng_state.pth (deflated 25%)
adding: kaggle/working/trainer/checkpoint-1652/trainer_state.json (deflated 79%)
adding: kaggle/working/trainer/checkpoint-1652/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-1652/optimizer.pt (deflated 18%)
adding: kaggle/working/trainer/checkpoint-1652/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-1652/adapter_model.safetensors (deflated 7%)
adding: kaggle/working/trainer/checkpoint-1652/scheduler.pt (deflated 56%)
adding: kaggle/working/trainer/checkpoint-1416/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-1416/README.md (deflated 66%)
adding: kaggle/working/trainer/checkpoint-1416/rng_state.pth (deflated 25%)
adding: kaggle/working/trainer/checkpoint-1416/trainer_state.json (deflated 79%)
adding: kaggle/working/trainer/checkpoint-1416/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-1416/optimizer.pt (deflated 17%)
adding: kaggle/working/trainer/checkpoint-1416/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-1416/adapter_model.safetensors (deflated 7%)
adding: kaggle/working/trainer/checkpoint-1416/scheduler.pt (deflated 55%)
adding: kaggle/working/trainer/README.md (deflated 47%)
adding: kaggle/working/trainer/checkpoint-1534/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-1534/README.md (deflated 66%)
adding: kaggle/working/trainer/checkpoint-1534/rng_state.pth (deflated 25%)
adding: kaggle/working/trainer/checkpoint-1534/trainer_state.json (deflated 79%)
adding: kaggle/working/trainer/checkpoint-1534/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-1534/optimizer.pt (deflated 17%)
adding: kaggle/working/trainer/checkpoint-1534/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-1534/adapter_model.safetensors (deflated 7%)
adding: kaggle/working/trainer/checkpoint-1534/scheduler.pt (deflated 55%)
adding: kaggle/working/trainer/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-590/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-590/README.md (deflated 66%)
adding: kaggle/working/trainer/checkpoint-590/rng_state.pth (deflated 25%)
adding: kaggle/working/trainer/checkpoint-590/trainer_state.json (deflated 72%)
adding: kaggle/working/trainer/checkpoint-590/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-590/optimizer.pt (deflated 17%)
adding: kaggle/working/trainer/checkpoint-590/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-590/adapter_model.safetensors (deflated 7%)
adding: kaggle/working/trainer/checkpoint-590/scheduler.pt (deflated 56%)
adding: kaggle/working/trainer/training_args.bin (deflated 51%)
adding: kaggle/working/trainer/checkpoint-1770/ (stored 0%)
adding: kaggle/working/trainer/checkpoint-1770/README.md (deflated 66%)
adding: kaggle/working/trainer/checkpoint-1770/rng_state.pth (deflated 25%)
adding: kaggle/working/trainer/checkpoint-1770/trainer_state.json (deflated 80%)
adding: kaggle/working/trainer/checkpoint-1770/adapter_config.json (deflated 52%)
adding: kaggle/working/trainer/checkpoint-1770/optimizer.pt (deflated 17%)
adding: kaggle/working/trainer/checkpoint-1770/training_args.bin (deflated 51%)