RichardErkhov committed (verified) · Commit ed0a5f3 · Parent: 923e426

uploaded readme

Files changed (1): README.md added (+77, -0)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)
mfuyu_llava_v3_8192_480p - bnb 4bits
- Model creator: https://huggingface.co/Mantis-VL/
- Original model: https://huggingface.co/Mantis-VL/mfuyu_llava_v3_8192_480p/
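Because this upload is a bitsandbytes 4-bit quant, the usual way to work with it is through `transformers` with `bitsandbytes` installed. The sketch below shows the general 4-bit loading pattern; the repo id to use, the use of the `Auto*` classes for this Fuyu-based model, and the exact quantization settings are assumptions, not details confirmed by this card.

```python
# Minimal 4-bit loading sketch. Assumptions (not confirmed by this card):
# that the Auto* classes resolve for this Fuyu-based model and the exact
# bnb settings used for the upload.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, BitsAndBytesConfig

# Original checkpoint; swap in this quant's repo id to load pre-quantized weights.
MODEL_ID = "Mantis-VL/mfuyu_llava_v3_8192_480p"

# On-the-fly 4-bit quantization of the original checkpoint; a repo that
# already ships 4-bit weights loads the same way without this config.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(MODEL_ID)
```

Generation then follows the usual Fuyu-style flow: build inputs with the processor (image plus prompt) and call `model.generate`.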
Original model description:

---
license: cc-by-nc-4.0
base_model: MFuyu/mfuyu_llava_8192_480p
tags:
- generated_from_trainer
model-index:
- name: mfuyu_llava_v3_8192_480p
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mfuyu_llava_v3_8192_480p

This model is a fine-tuned version of [MFuyu/mfuyu_llava_8192_480p](https://huggingface.co/MFuyu/mfuyu_llava_8192_480p) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
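For anyone reconstructing this run, the list above maps almost one-to-one onto `transformers.TrainingArguments`; here is a minimal sketch under that assumption. The 16 devices come from the distributed launcher rather than the arguments, the two batch-size totals are derived values (per-device batch × devices × accumulation steps), and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Sketch reconstructing the listed hyperparameters (output_dir is a placeholder).
args = TrainingArguments(
    output_dir="mfuyu_llava_v3_8192_480p",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    gradient_accumulation_steps=4,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,               # lr_scheduler_warmup_ratio: 0.03
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# With 16 GPUs (supplied by the launcher, e.g. torchrun --nproc_per_node=16):
#   total_train_batch_size = 1 * 16 * 4 = 64
#   total_eval_batch_size  = 1 * 16     = 16
```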
### Training results



### Framework versions

- Transformers 4.37.0
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2