---
library_name: transformers
license: bsd-3-clause
base_model: Salesforce/blip-image-captioning-base
tags:
  - generated_from_trainer
model-index:
  - name: BLIP_Captioning
    results: []
---

# BLIP_Captioning

This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0201
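
Since the checkpoint shares the base model's architecture, it can be loaded with the standard BLIP classes in `transformers`. A minimal captioning sketch; the repo id `Vrjb/BLIP_Captioning` and the image path are assumptions:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Repo id inferred from the card name (an assumption); point this at the
# actual checkpoint location if it differs.
model_id = "Vrjb/BLIP_Captioning"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

# Placeholder path: any RGB image will do.
image = Image.open("example.jpg").convert("RGB")

inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```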

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
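
For context, these values map roughly onto the following `TrainingArguments`. This is a reconstruction under assumptions, not the exact training script: `output_dir` and the 500-step evaluation/logging cadence are inferred from the results table below.

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameters above.
# output_dir, eval_strategy, and the step cadence are assumptions.
training_args = TrainingArguments(
    output_dir="BLIP_Captioning",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",    # AdamW; betas=(0.9, 0.999) and eps=1e-08 are the torch defaults
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=1,
    eval_strategy="steps",  # evaluation every 500 steps matches the table below
    eval_steps=500,
    logging_steps=500,
)
```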

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.047         | 0.0446 | 500   | 0.0336          |
| 0.0379        | 0.0892 | 1000  | 0.0419          |
| 0.0285        | 0.1339 | 1500  | 0.0247          |
| 0.0247        | 0.1785 | 2000  | 0.0254          |
| 0.0244        | 0.2231 | 2500  | 0.0238          |
| 0.0242        | 0.2677 | 3000  | 0.0240          |
| 0.0239        | 0.3124 | 3500  | 0.0234          |
| 0.0243        | 0.3570 | 4000  | 0.0235          |
| 0.0502        | 0.4016 | 4500  | 0.0350          |
| 0.0236        | 0.4462 | 5000  | 0.0227          |
| 0.0228        | 0.4909 | 5500  | 0.0228          |
| 0.0225        | 0.5355 | 6000  | 0.0249          |
| 0.0232        | 0.5801 | 6500  | 0.0846          |
| 0.0222        | 0.6247 | 7000  | 0.0223          |
| 0.023         | 0.6693 | 7500  | 0.0213          |
| 0.0217        | 0.7140 | 8000  | 0.0211          |
| 0.0212        | 0.7586 | 8500  | 0.0210          |
| 0.0213        | 0.8032 | 9000  | 0.0207          |
| 0.0217        | 0.8478 | 9500  | 0.0204          |
| 0.0203        | 0.8925 | 10000 | 0.0208          |
| 0.0205        | 0.9371 | 10500 | 0.0206          |
| 0.0207        | 0.9817 | 11000 | 0.0201          |

### Framework versions

- Transformers 4.55.4
- PyTorch 2.1.2+cu121
- Tokenizers 0.21.4