Upload folder using huggingface_hub
- .gitattributes +1 -0
- LICENSE.txt +79 -0
- added_tokens.json +28 -0
- config.json +60 -0
- generation_config.json +6 -0
- merges.txt +0 -0
- model-00001-of-00003.safetensors +3 -0
- model-00002-of-00003.safetensors +3 -0
- model-00003-of-00003.safetensors +3 -0
- model.safetensors.index.json +757 -0
- modular_isaac.py +1496 -0
- special_tokens_map.json +31 -0
- tokenizer.json +3 -0
- tokenizer_config.json +241 -0
- vocab.json +0 -0
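For reference, a commit like this is typically produced with the `huggingface_hub` Python client. The sketch below shows the usual call; the repository id and local folder path are placeholders, not values recorded in this commit.

```python
# Minimal sketch of uploading a model folder with huggingface_hub.
# "your-org/your-model" and "./isaac-checkpoint" are placeholders.
from huggingface_hub import HfApi

api = HfApi()  # uses the token from `huggingface-cli login` by default
api.upload_folder(
    folder_path="./isaac-checkpoint",   # local folder holding the files listed above
    repo_id="your-org/your-model",      # placeholder repository id
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```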
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
LICENSE.txt
ADDED
@@ -0,0 +1,79 @@
# Perceptron, Inc. Non-Production License

## 1. Scope and acceptance

**1.1. Scope of the Agreement.**
This Agreement applies to any use, modification, or Distribution of any Perceptron Model by You, regardless of the source You obtained a copy of such Perceptron Model.

**1.2. Acceptance.** By accessing, using, modifying, Distributing a Perceptron Model, or by creating, using or distributing a Derivative of the Perceptron Model, You agree to be bound by this Agreement.

**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on behalf of Your employer or another person or entity, You warrant and represent that You have the authority to act and accept this Agreement on their behalf. In such a case, the word “You” in this Agreement will refer to Your employer or such other person or entity.

## 2. License
**2.1. Grant of rights.** Subject to Section 3 below, Perceptron, Inc. hereby grants You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable, limited license to use, copy, modify, and Distribute under the conditions provided in Section 2.2 below, the Perceptron Model and any Derivatives made by or for Perceptron, Inc. and to create Derivatives of the Perceptron Model.

**2.2. Distribution of Perceptron Model and Derivatives made by or for Perceptron, Inc.** Subject to Section 3 below, You may Distribute copies of the Perceptron Model and/or Derivatives made by or for Perceptron, Inc., under the following conditions:
- You must make available a copy of this Agreement to third-party recipients of the Perceptron Models and/or Derivatives made by or for Perceptron, Inc. you Distribute, it being specified that any rights to use the Perceptron Models and/or Derivatives made by or for Perceptron, Inc. shall be directly granted by Perceptron, Inc. to said third-party recipients pursuant to the Perceptron, Inc. Non-Production License agreement executed between these parties;
- You must retain in all copies of the Perceptron Models the following attribution notice within a “Notice” text file distributed as part of such copies: “Licensed by Perceptron, Inc. under the Perceptron, Inc. Non-Production License”.

**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below, You may Distribute any Derivatives made by or for You under additional or different terms and conditions, provided that:
- In any event, the use and modification of Perceptron Model and/or Derivatives made by or for Perceptron, Inc. shall remain governed by the terms and conditions of this Agreement;
- You include in any such Derivatives made by or for You prominent notices stating that You modified the concerned Perceptron Model; and
- Any terms and conditions You impose on any third-party recipients relating to Derivatives made by or for You shall neither limit such third-party recipients’ use of the Perceptron Model or any Derivatives made by or for Perceptron, Inc. in accordance with the Perceptron, Inc. Non-Production License nor conflict with any of its terms and conditions.

## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means, that the Derivatives made by or for You and/or any modified version of the Perceptron Model You Distribute under your name and responsibility is an official product of Perceptron, Inc. or has been endorsed, approved or validated by Perceptron, Inc., unless You are authorized by Us to do so in writing.

**3.2. Usage Limitation.**
- You shall only use the Perceptron Models and Derivatives (whether or not created by Perceptron, Inc.) for testing, research, Personal, or evaluation purposes in Non-Production Environments;
- Subject to the foregoing, You shall not supply the Perceptron Models or Derivatives in the course of a commercial activity, whether in return for payment or free of charge, in any medium or form, including but not limited to through a hosted or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.

**3.3. Usage not permitted under this Agreement.** If You want to use a Perceptron Model or a Derivative for any purpose that is not expressly authorized under this Agreement, You must request a license from Perceptron, Inc., which Perceptron, Inc. may grant to You in Perceptron, Inc.’s sole discretion. Please contact Perceptron, Inc. at the following e-mail address if You want to discuss such a license: [email protected]

## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and in connection with the Perceptron Models, You may not use any name or mark owned by or associated with Perceptron, Inc. or any of its affiliates, except (i) as required for reasonable and customary use in describing and Distributing the Perceptron Models and Derivatives made by or for Perceptron, Inc. and (ii) for attribution purposes as required by this Agreement.

**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely responsible for the Outputs You generate and their subsequent uses in accordance with this Agreement.

**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives that You may create or that may be created for You shall be subject to the restrictions set out in Section 3 of this Agreement.

## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall Perceptron, Inc. be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this Agreement or out of the use or inability to use the Perceptron Models and Derivatives (including but not limited to damages for loss of data, loss of goodwill, loss of expected profit or savings, work stoppage, computer failure or malfunction, or any damage caused by malware or security breaches), even if Perceptron, Inc. has been advised of the possibility of such damages.

**5.2. Indemnification.** You agree to indemnify and hold harmless Perceptron, Inc. from and against any claims, damages, or losses arising out of or related to Your use or Distribution of the Perceptron Models and Derivatives.

## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or agreed to in writing, Perceptron, Inc. provides the Perceptron Models and Derivatives on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Perceptron, Inc. does not represent nor warrant that the Perceptron Models and Derivatives will be error-free, meet Your or any third party’s requirements, be secure or will allow You or any third party to achieve any kind of result or generate any kind of content. You are solely responsible for determining the appropriateness of using or Distributing the Perceptron Models and Derivatives and assume any risks associated with Your exercise of rights under this Agreement.

## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of this Agreement or access to the concerned Perceptron Models or Derivatives and will continue until terminated in accordance with the following terms.

**7.2. Termination.** Perceptron, Inc. may terminate this Agreement at any time if You are in breach of this Agreement. Upon termination of this Agreement, You must cease to use all Perceptron Models and Derivatives and shall permanently delete any copy thereof. Sections 5, 6, 7 and 8 shall survive the termination of this Agreement.

**7.3. Litigation.** If You initiate any legal action or proceedings against Us or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging that the Model or a Derivative, or any part thereof, infringe upon intellectual property or other rights owned or licensable by You, then any licenses granted to You under this Agreement will immediately terminate as of the date such legal action or claim is filed or initiated.

## 8. General provisions
**8.1. Governing Law.** This Agreement will be governed by and construed in accordance with the laws of the State of Washington, without regard to its conflict of law principles.

**8.2. Jurisdiction.** The state and federal courts located in King County, Washington shall have exclusive jurisdiction over any dispute arising out of or relating to this Agreement, and You and We consent to personal jurisdiction and venue in such courts.

**8.3. Severability.** If any provision of this Agreement is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein.

## 9. Definitions
**“Agreement”**: means this Perceptron, Inc. Non-Production License agreement governing the access, use, and Distribution of the Perceptron Models and Derivatives.

**“Derivative”**: means any (i) modified version of the Perceptron Model (including but not limited to any customized or fine-tuned version thereof), (ii) work based on the Perceptron Model, or (iii) any other derivative work thereof. For the avoidance of doubt, Outputs are not considered as Derivatives under this Agreement.

**“Distribution”**, **“Distributing”**, **“Distribute”** or **“Distributed”**: means providing or making available, by any means, a copy of the Perceptron Models and/or the Derivatives as the case may be, subject to Section 3 of this Agreement.

**“Perceptron, Inc.”**, **“We”** or **“Us”**: means Perceptron, Inc., a Delaware corporation with its principal place of business at 10900 NE 8th St Suite 613, Bellevue, WA 98004.

**“Perceptron Model”**: means the foundational large language model(s), and its elements which include algorithms, software, instructed checkpoints, parameters, source code (inference code, evaluation code and, if applicable, fine-tuning code) and any other elements associated thereto made available by Perceptron, Inc. under this Agreement, including, if any, the technical documentation, manuals and instructions for the use and operation thereof.

**“Non-Production Environment”**: means any setting, use case, or application of the Perceptron Models or Derivatives that expressly excludes live, real-world conditions, commercial operations, revenue-generating activities, or direct interactions with or impacts on end users (such as, for instance, Your employees or customers). Non-Production Environment may include, but is not limited to, any setting, use case, or application for research, development, testing, quality assurance, training, internal evaluation (other than any internal usage by employees in the context of the company’s business activities), and demonstration purposes.

**“Outputs”**: means any content generated by the operation of the Perceptron Models or the Derivatives from a prompt (i.e., text instructions) provided by users. For the avoidance of doubt, Outputs do not include any components of a Perceptron Models, such as any fine-tuned versions of the Perceptron Models, the weights, or parameters.

**“Personal”**: means any use of a Perceptron Model or a Derivative that is (i) solely for personal, non-profit and non-commercial purposes and (ii) not directly or indirectly connected to any commercial activities, business operations, or employment responsibilities. For illustration purposes, Personal use of a Model or a Derivative does not include any usage by individuals employed in companies in the context of their daily tasks, any activity that is intended to generate revenue, or that is performed on behalf of a commercial entity.

**“You”**: means the individual or entity entering into this Agreement with Perceptron, Inc.
added_tokens.json
ADDED
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
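Once the repository is downloaded, these special-token IDs can be cross-checked through the tokenizer. A small sketch follows; the local path is a placeholder for wherever the repo is cloned.

```python
# Sanity-check a few of the added special tokens against the IDs listed above.
# "./isaac-checkpoint" is a placeholder for a local copy of this repo.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./isaac-checkpoint", trust_remote_code=True)
print(tok.convert_tokens_to_ids("<|im_start|>"))      # expected: 151644
print(tok.convert_tokens_to_ids("<|vision_start|>"))  # expected: 151652
print(tok.convert_tokens_to_ids("<think>"))           # expected: 151667
```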
config.json
ADDED
@@ -0,0 +1,60 @@
{
  "architectures": [
    "IsaacForConditionalGeneration"
  ],
  "auto_map": {
    "AutoProcessor": "modular_isaac.IsaacProcessor",
    "AutoConfig": "modular_isaac.IsaacConfig",
    "AutoModelForCausalLM": "modular_isaac.IsaacForConditionalGeneration"
  },
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 6144,
  "max_position_embeddings": 40960,
  "max_sequence_length": 16384,
  "max_window_layers": 28,
  "model_type": "isaac",
  "num_attention_heads": 16,
  "num_hidden_layers": 28,
  "num_key_value_heads": 8,
  "pixel_shuffle_scale": 2,
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "mrope_interleaved": true,
    "mrope_section": null,
    "rope_type": "default"
  },
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.51.1",
  "use_cache": true,
  "use_sliding_window": false,
  "video_patch_size": 16,
  "vision_config": {
    "attention_dropout": 0.0,
    "hidden_act": "gelu_pytorch_tanh",
    "hidden_size": 1152,
    "image_size": 256,
    "intermediate_size": 4304,
    "layer_norm_eps": 1e-06,
    "model_type": "pixel_shuffle_siglip2",
    "num_attention_heads": 16,
    "num_channels": 3,
    "num_hidden_layers": 27,
    "num_patches": 256,
    "patch_size": 16,
    "pixel_shuffle_scale_factor": 2
  },
  "vision_max_num_patches": 6144,
  "vision_min_num_patches": 256,
  "vision_token": "<image>",
  "vocab_size": 151936
}
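Because `auto_map` points the Auto classes at custom code in `modular_isaac.py`, loading this checkpoint requires `trust_remote_code=True`. A hedged sketch of how that typically looks, with a placeholder local path:

```python
# Sketch of loading via the auto_map entries above; trust_remote_code routes the
# Auto classes to modular_isaac.IsaacConfig / IsaacForConditionalGeneration / IsaacProcessor.
# "./isaac-checkpoint" is a placeholder path.
from transformers import AutoConfig, AutoModelForCausalLM, AutoProcessor

config = AutoConfig.from_pretrained("./isaac-checkpoint", trust_remote_code=True)
print(config.model_type)  # "isaac"

model = AutoModelForCausalLM.from_pretrained(
    "./isaac-checkpoint",
    trust_remote_code=True,
    torch_dtype="auto",  # the checkpoint stores float32 weights per torch_dtype above
)
processor = AutoProcessor.from_pretrained("./isaac-checkpoint", trust_remote_code=True)
```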
generation_config.json
ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "transformers_version": "4.51.1"
}
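The generation defaults above can be inspected (or overridden) with `GenerationConfig`; a brief sketch, again with a placeholder path:

```python
# Read the generation defaults shipped with the checkpoint.
# "./isaac-checkpoint" is a placeholder path.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("./isaac-checkpoint")
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id)  # 151643 151645
```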
merges.txt
ADDED
The diff for this file is too large to render.
model-00001-of-00003.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d31217bf5365162ae38b4e6a5b27acff8481ef892e9803874cbb49476d0f501
size 4969539560
model-00002-of-00003.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e133442cabfd18ed5ba13cd21527d0220c78e2989a2778b8849e5835e0995c75
size 4054187824
model-00003-of-00003.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d48fec96ee25327332beee7dbd72e4d82a20d8e2c3e7135fcd6ce3bb9229862
size 1244659840
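The three `.safetensors` entries above are Git LFS pointer files: the shards themselves live in LFS storage and are identified by SHA-256. After downloading a shard, its hash can be checked against the pointer's `oid`; a minimal sketch, assuming the shard sits in the current directory:

```python
# Verify a downloaded shard against the sha256 recorded in its LFS pointer file.
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

# Expected value taken from the pointer for model-00001-of-00003.safetensors above.
expected = "3d31217bf5365162ae38b4e6a5b27acff8481ef892e9803874cbb49476d0f501"
assert sha256_of("model-00001-of-00003.safetensors") == expected
```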
model.safetensors.index.json
ADDED
@@ -0,0 +1,757 @@
{
  "metadata": {
    "total_size": 10268292032
  },
  "weight_map": {
    "lm_head.weight": "model-00003-of-00003.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.18.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.norm.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.embeddings.patch_embedding.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.embeddings.patch_embedding.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.embeddings.position_embedding.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.layer_norm1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.layer_norm1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.layer_norm2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.layer_norm2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.mlp.fc1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.mlp.fc1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.mlp.fc2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.mlp.fc2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.0.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.layer_norm1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.layer_norm1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.layer_norm2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.layer_norm2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.mlp.fc1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.mlp.fc1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.mlp.fc2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.mlp.fc2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.1.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.layer_norm1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.layer_norm1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.layer_norm2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.layer_norm2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.mlp.fc1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.mlp.fc1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.mlp.fc2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.mlp.fc2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.10.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.layer_norm1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.layer_norm1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.layer_norm2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.layer_norm2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.mlp.fc1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.mlp.fc1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.mlp.fc2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.mlp.fc2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.layer_norm1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.layer_norm1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.layer_norm2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.layer_norm2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.mlp.fc1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.mlp.fc1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.mlp.fc2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.mlp.fc2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.layer_norm1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.layer_norm1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.layer_norm2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.layer_norm2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.mlp.fc1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.mlp.fc1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.mlp.fc2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.mlp.fc2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.14.layer_norm1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.14.layer_norm1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.14.layer_norm2.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.14.layer_norm2.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.14.mlp.fc1.bias": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.14.mlp.fc1.weight": "model-00002-of-00003.safetensors",
    "model.vision_embedding.0.encoder.layers.14.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
423 |
+
"model.vision_embedding.0.encoder.layers.14.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
424 |
+
"model.vision_embedding.0.encoder.layers.14.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
425 |
+
"model.vision_embedding.0.encoder.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
426 |
+
"model.vision_embedding.0.encoder.layers.14.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
427 |
+
"model.vision_embedding.0.encoder.layers.14.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
428 |
+
"model.vision_embedding.0.encoder.layers.14.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
429 |
+
"model.vision_embedding.0.encoder.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
430 |
+
"model.vision_embedding.0.encoder.layers.14.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
431 |
+
"model.vision_embedding.0.encoder.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
432 |
+
"model.vision_embedding.0.encoder.layers.15.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
433 |
+
"model.vision_embedding.0.encoder.layers.15.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
434 |
+
"model.vision_embedding.0.encoder.layers.15.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
435 |
+
"model.vision_embedding.0.encoder.layers.15.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
436 |
+
"model.vision_embedding.0.encoder.layers.15.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
437 |
+
"model.vision_embedding.0.encoder.layers.15.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
438 |
+
"model.vision_embedding.0.encoder.layers.15.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
439 |
+
"model.vision_embedding.0.encoder.layers.15.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
440 |
+
"model.vision_embedding.0.encoder.layers.15.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
441 |
+
"model.vision_embedding.0.encoder.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
442 |
+
"model.vision_embedding.0.encoder.layers.15.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
443 |
+
"model.vision_embedding.0.encoder.layers.15.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
444 |
+
"model.vision_embedding.0.encoder.layers.15.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
445 |
+
"model.vision_embedding.0.encoder.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
446 |
+
"model.vision_embedding.0.encoder.layers.15.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
447 |
+
"model.vision_embedding.0.encoder.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
448 |
+
"model.vision_embedding.0.encoder.layers.16.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
449 |
+
"model.vision_embedding.0.encoder.layers.16.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
450 |
+
"model.vision_embedding.0.encoder.layers.16.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
451 |
+
"model.vision_embedding.0.encoder.layers.16.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
452 |
+
"model.vision_embedding.0.encoder.layers.16.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
453 |
+
"model.vision_embedding.0.encoder.layers.16.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
454 |
+
"model.vision_embedding.0.encoder.layers.16.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
455 |
+
"model.vision_embedding.0.encoder.layers.16.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
456 |
+
"model.vision_embedding.0.encoder.layers.16.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
457 |
+
"model.vision_embedding.0.encoder.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
458 |
+
"model.vision_embedding.0.encoder.layers.16.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
459 |
+
"model.vision_embedding.0.encoder.layers.16.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
460 |
+
"model.vision_embedding.0.encoder.layers.16.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
461 |
+
"model.vision_embedding.0.encoder.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
462 |
+
"model.vision_embedding.0.encoder.layers.16.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
463 |
+
"model.vision_embedding.0.encoder.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
464 |
+
"model.vision_embedding.0.encoder.layers.17.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
465 |
+
"model.vision_embedding.0.encoder.layers.17.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
466 |
+
"model.vision_embedding.0.encoder.layers.17.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
467 |
+
"model.vision_embedding.0.encoder.layers.17.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
468 |
+
"model.vision_embedding.0.encoder.layers.17.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
469 |
+
"model.vision_embedding.0.encoder.layers.17.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
470 |
+
"model.vision_embedding.0.encoder.layers.17.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
471 |
+
"model.vision_embedding.0.encoder.layers.17.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
472 |
+
"model.vision_embedding.0.encoder.layers.17.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
473 |
+
"model.vision_embedding.0.encoder.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
474 |
+
"model.vision_embedding.0.encoder.layers.17.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
475 |
+
"model.vision_embedding.0.encoder.layers.17.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
476 |
+
"model.vision_embedding.0.encoder.layers.17.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
477 |
+
"model.vision_embedding.0.encoder.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
478 |
+
"model.vision_embedding.0.encoder.layers.17.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
479 |
+
"model.vision_embedding.0.encoder.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
480 |
+
"model.vision_embedding.0.encoder.layers.18.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
481 |
+
"model.vision_embedding.0.encoder.layers.18.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
482 |
+
"model.vision_embedding.0.encoder.layers.18.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
483 |
+
"model.vision_embedding.0.encoder.layers.18.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
484 |
+
"model.vision_embedding.0.encoder.layers.18.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
485 |
+
"model.vision_embedding.0.encoder.layers.18.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
486 |
+
"model.vision_embedding.0.encoder.layers.18.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
487 |
+
"model.vision_embedding.0.encoder.layers.18.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
488 |
+
"model.vision_embedding.0.encoder.layers.18.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
489 |
+
"model.vision_embedding.0.encoder.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
490 |
+
"model.vision_embedding.0.encoder.layers.18.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
491 |
+
"model.vision_embedding.0.encoder.layers.18.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
492 |
+
"model.vision_embedding.0.encoder.layers.18.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
493 |
+
"model.vision_embedding.0.encoder.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
494 |
+
"model.vision_embedding.0.encoder.layers.18.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
495 |
+
"model.vision_embedding.0.encoder.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
496 |
+
"model.vision_embedding.0.encoder.layers.19.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
497 |
+
"model.vision_embedding.0.encoder.layers.19.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
498 |
+
"model.vision_embedding.0.encoder.layers.19.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
499 |
+
"model.vision_embedding.0.encoder.layers.19.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
500 |
+
"model.vision_embedding.0.encoder.layers.19.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
501 |
+
"model.vision_embedding.0.encoder.layers.19.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
502 |
+
"model.vision_embedding.0.encoder.layers.19.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
503 |
+
"model.vision_embedding.0.encoder.layers.19.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
504 |
+
"model.vision_embedding.0.encoder.layers.19.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
505 |
+
"model.vision_embedding.0.encoder.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
506 |
+
"model.vision_embedding.0.encoder.layers.19.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
507 |
+
"model.vision_embedding.0.encoder.layers.19.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
508 |
+
"model.vision_embedding.0.encoder.layers.19.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
509 |
+
"model.vision_embedding.0.encoder.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
510 |
+
"model.vision_embedding.0.encoder.layers.19.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
511 |
+
"model.vision_embedding.0.encoder.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
512 |
+
"model.vision_embedding.0.encoder.layers.2.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
513 |
+
"model.vision_embedding.0.encoder.layers.2.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
514 |
+
"model.vision_embedding.0.encoder.layers.2.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
515 |
+
"model.vision_embedding.0.encoder.layers.2.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
516 |
+
"model.vision_embedding.0.encoder.layers.2.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
517 |
+
"model.vision_embedding.0.encoder.layers.2.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
518 |
+
"model.vision_embedding.0.encoder.layers.2.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
519 |
+
"model.vision_embedding.0.encoder.layers.2.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
520 |
+
"model.vision_embedding.0.encoder.layers.2.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
521 |
+
"model.vision_embedding.0.encoder.layers.2.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
522 |
+
"model.vision_embedding.0.encoder.layers.2.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
523 |
+
"model.vision_embedding.0.encoder.layers.2.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
524 |
+
"model.vision_embedding.0.encoder.layers.2.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
525 |
+
"model.vision_embedding.0.encoder.layers.2.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
526 |
+
"model.vision_embedding.0.encoder.layers.2.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
527 |
+
"model.vision_embedding.0.encoder.layers.2.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
528 |
+
"model.vision_embedding.0.encoder.layers.20.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
529 |
+
"model.vision_embedding.0.encoder.layers.20.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
530 |
+
"model.vision_embedding.0.encoder.layers.20.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
531 |
+
"model.vision_embedding.0.encoder.layers.20.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
532 |
+
"model.vision_embedding.0.encoder.layers.20.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
533 |
+
"model.vision_embedding.0.encoder.layers.20.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
534 |
+
"model.vision_embedding.0.encoder.layers.20.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
535 |
+
"model.vision_embedding.0.encoder.layers.20.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
536 |
+
"model.vision_embedding.0.encoder.layers.20.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
537 |
+
"model.vision_embedding.0.encoder.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
538 |
+
"model.vision_embedding.0.encoder.layers.20.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
539 |
+
"model.vision_embedding.0.encoder.layers.20.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
540 |
+
"model.vision_embedding.0.encoder.layers.20.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
541 |
+
"model.vision_embedding.0.encoder.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
542 |
+
"model.vision_embedding.0.encoder.layers.20.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
543 |
+
"model.vision_embedding.0.encoder.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
544 |
+
"model.vision_embedding.0.encoder.layers.21.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
545 |
+
"model.vision_embedding.0.encoder.layers.21.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
546 |
+
"model.vision_embedding.0.encoder.layers.21.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
547 |
+
"model.vision_embedding.0.encoder.layers.21.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
548 |
+
"model.vision_embedding.0.encoder.layers.21.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
549 |
+
"model.vision_embedding.0.encoder.layers.21.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
550 |
+
"model.vision_embedding.0.encoder.layers.21.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
551 |
+
"model.vision_embedding.0.encoder.layers.21.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
552 |
+
"model.vision_embedding.0.encoder.layers.21.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
553 |
+
"model.vision_embedding.0.encoder.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
554 |
+
"model.vision_embedding.0.encoder.layers.21.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
555 |
+
"model.vision_embedding.0.encoder.layers.21.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
556 |
+
"model.vision_embedding.0.encoder.layers.21.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
557 |
+
"model.vision_embedding.0.encoder.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
558 |
+
"model.vision_embedding.0.encoder.layers.21.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
559 |
+
"model.vision_embedding.0.encoder.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
560 |
+
"model.vision_embedding.0.encoder.layers.22.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
561 |
+
"model.vision_embedding.0.encoder.layers.22.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
562 |
+
"model.vision_embedding.0.encoder.layers.22.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
563 |
+
"model.vision_embedding.0.encoder.layers.22.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
564 |
+
"model.vision_embedding.0.encoder.layers.22.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
565 |
+
"model.vision_embedding.0.encoder.layers.22.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
566 |
+
"model.vision_embedding.0.encoder.layers.22.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
567 |
+
"model.vision_embedding.0.encoder.layers.22.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
568 |
+
"model.vision_embedding.0.encoder.layers.22.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
569 |
+
"model.vision_embedding.0.encoder.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
570 |
+
"model.vision_embedding.0.encoder.layers.22.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
571 |
+
"model.vision_embedding.0.encoder.layers.22.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
572 |
+
"model.vision_embedding.0.encoder.layers.22.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
573 |
+
"model.vision_embedding.0.encoder.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
574 |
+
"model.vision_embedding.0.encoder.layers.22.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
575 |
+
"model.vision_embedding.0.encoder.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
576 |
+
"model.vision_embedding.0.encoder.layers.23.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
577 |
+
"model.vision_embedding.0.encoder.layers.23.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
578 |
+
"model.vision_embedding.0.encoder.layers.23.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
579 |
+
"model.vision_embedding.0.encoder.layers.23.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
580 |
+
"model.vision_embedding.0.encoder.layers.23.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
581 |
+
"model.vision_embedding.0.encoder.layers.23.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
582 |
+
"model.vision_embedding.0.encoder.layers.23.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
583 |
+
"model.vision_embedding.0.encoder.layers.23.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
584 |
+
"model.vision_embedding.0.encoder.layers.23.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
585 |
+
"model.vision_embedding.0.encoder.layers.23.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
586 |
+
"model.vision_embedding.0.encoder.layers.23.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
587 |
+
"model.vision_embedding.0.encoder.layers.23.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
588 |
+
"model.vision_embedding.0.encoder.layers.23.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
589 |
+
"model.vision_embedding.0.encoder.layers.23.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
590 |
+
"model.vision_embedding.0.encoder.layers.23.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
591 |
+
"model.vision_embedding.0.encoder.layers.23.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
592 |
+
"model.vision_embedding.0.encoder.layers.24.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
593 |
+
"model.vision_embedding.0.encoder.layers.24.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
594 |
+
"model.vision_embedding.0.encoder.layers.24.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
595 |
+
"model.vision_embedding.0.encoder.layers.24.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
596 |
+
"model.vision_embedding.0.encoder.layers.24.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
597 |
+
"model.vision_embedding.0.encoder.layers.24.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
598 |
+
"model.vision_embedding.0.encoder.layers.24.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
599 |
+
"model.vision_embedding.0.encoder.layers.24.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
600 |
+
"model.vision_embedding.0.encoder.layers.24.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
601 |
+
"model.vision_embedding.0.encoder.layers.24.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
602 |
+
"model.vision_embedding.0.encoder.layers.24.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
603 |
+
"model.vision_embedding.0.encoder.layers.24.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
604 |
+
"model.vision_embedding.0.encoder.layers.24.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
605 |
+
"model.vision_embedding.0.encoder.layers.24.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
606 |
+
"model.vision_embedding.0.encoder.layers.24.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
607 |
+
"model.vision_embedding.0.encoder.layers.24.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
608 |
+
"model.vision_embedding.0.encoder.layers.25.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
609 |
+
"model.vision_embedding.0.encoder.layers.25.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
610 |
+
"model.vision_embedding.0.encoder.layers.25.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
611 |
+
"model.vision_embedding.0.encoder.layers.25.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
612 |
+
"model.vision_embedding.0.encoder.layers.25.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
613 |
+
"model.vision_embedding.0.encoder.layers.25.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
614 |
+
"model.vision_embedding.0.encoder.layers.25.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
615 |
+
"model.vision_embedding.0.encoder.layers.25.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
616 |
+
"model.vision_embedding.0.encoder.layers.25.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
617 |
+
"model.vision_embedding.0.encoder.layers.25.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
618 |
+
"model.vision_embedding.0.encoder.layers.25.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
619 |
+
"model.vision_embedding.0.encoder.layers.25.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
620 |
+
"model.vision_embedding.0.encoder.layers.25.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
621 |
+
"model.vision_embedding.0.encoder.layers.25.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
622 |
+
"model.vision_embedding.0.encoder.layers.25.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
623 |
+
"model.vision_embedding.0.encoder.layers.25.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
624 |
+
"model.vision_embedding.0.encoder.layers.26.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
625 |
+
"model.vision_embedding.0.encoder.layers.26.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
626 |
+
"model.vision_embedding.0.encoder.layers.26.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
627 |
+
"model.vision_embedding.0.encoder.layers.26.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
628 |
+
"model.vision_embedding.0.encoder.layers.26.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
629 |
+
"model.vision_embedding.0.encoder.layers.26.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
630 |
+
"model.vision_embedding.0.encoder.layers.26.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
631 |
+
"model.vision_embedding.0.encoder.layers.26.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
632 |
+
"model.vision_embedding.0.encoder.layers.26.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
633 |
+
"model.vision_embedding.0.encoder.layers.26.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
634 |
+
"model.vision_embedding.0.encoder.layers.26.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
635 |
+
"model.vision_embedding.0.encoder.layers.26.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
636 |
+
"model.vision_embedding.0.encoder.layers.26.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
637 |
+
"model.vision_embedding.0.encoder.layers.26.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
638 |
+
"model.vision_embedding.0.encoder.layers.26.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
639 |
+
"model.vision_embedding.0.encoder.layers.26.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
640 |
+
"model.vision_embedding.0.encoder.layers.3.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
641 |
+
"model.vision_embedding.0.encoder.layers.3.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
642 |
+
"model.vision_embedding.0.encoder.layers.3.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
643 |
+
"model.vision_embedding.0.encoder.layers.3.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
644 |
+
"model.vision_embedding.0.encoder.layers.3.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
645 |
+
"model.vision_embedding.0.encoder.layers.3.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
646 |
+
"model.vision_embedding.0.encoder.layers.3.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
647 |
+
"model.vision_embedding.0.encoder.layers.3.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
648 |
+
"model.vision_embedding.0.encoder.layers.3.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
649 |
+
"model.vision_embedding.0.encoder.layers.3.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
650 |
+
"model.vision_embedding.0.encoder.layers.3.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
651 |
+
"model.vision_embedding.0.encoder.layers.3.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
652 |
+
"model.vision_embedding.0.encoder.layers.3.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
653 |
+
"model.vision_embedding.0.encoder.layers.3.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
654 |
+
"model.vision_embedding.0.encoder.layers.3.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
655 |
+
"model.vision_embedding.0.encoder.layers.3.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
656 |
+
"model.vision_embedding.0.encoder.layers.4.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
657 |
+
"model.vision_embedding.0.encoder.layers.4.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
658 |
+
"model.vision_embedding.0.encoder.layers.4.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
659 |
+
"model.vision_embedding.0.encoder.layers.4.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
660 |
+
"model.vision_embedding.0.encoder.layers.4.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
661 |
+
"model.vision_embedding.0.encoder.layers.4.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
662 |
+
"model.vision_embedding.0.encoder.layers.4.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
663 |
+
"model.vision_embedding.0.encoder.layers.4.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
664 |
+
"model.vision_embedding.0.encoder.layers.4.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
665 |
+
"model.vision_embedding.0.encoder.layers.4.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
666 |
+
"model.vision_embedding.0.encoder.layers.4.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
667 |
+
"model.vision_embedding.0.encoder.layers.4.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
668 |
+
"model.vision_embedding.0.encoder.layers.4.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
669 |
+
"model.vision_embedding.0.encoder.layers.4.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
670 |
+
"model.vision_embedding.0.encoder.layers.4.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
671 |
+
"model.vision_embedding.0.encoder.layers.4.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
672 |
+
"model.vision_embedding.0.encoder.layers.5.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
673 |
+
"model.vision_embedding.0.encoder.layers.5.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
674 |
+
"model.vision_embedding.0.encoder.layers.5.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
675 |
+
"model.vision_embedding.0.encoder.layers.5.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
676 |
+
"model.vision_embedding.0.encoder.layers.5.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
677 |
+
"model.vision_embedding.0.encoder.layers.5.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
678 |
+
"model.vision_embedding.0.encoder.layers.5.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
679 |
+
"model.vision_embedding.0.encoder.layers.5.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
680 |
+
"model.vision_embedding.0.encoder.layers.5.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
681 |
+
"model.vision_embedding.0.encoder.layers.5.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
682 |
+
"model.vision_embedding.0.encoder.layers.5.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
683 |
+
"model.vision_embedding.0.encoder.layers.5.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
684 |
+
"model.vision_embedding.0.encoder.layers.5.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
685 |
+
"model.vision_embedding.0.encoder.layers.5.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
686 |
+
"model.vision_embedding.0.encoder.layers.5.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
687 |
+
"model.vision_embedding.0.encoder.layers.5.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
688 |
+
"model.vision_embedding.0.encoder.layers.6.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
689 |
+
"model.vision_embedding.0.encoder.layers.6.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
690 |
+
"model.vision_embedding.0.encoder.layers.6.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
691 |
+
"model.vision_embedding.0.encoder.layers.6.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
692 |
+
"model.vision_embedding.0.encoder.layers.6.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
693 |
+
"model.vision_embedding.0.encoder.layers.6.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
694 |
+
"model.vision_embedding.0.encoder.layers.6.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
695 |
+
"model.vision_embedding.0.encoder.layers.6.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
696 |
+
"model.vision_embedding.0.encoder.layers.6.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
697 |
+
"model.vision_embedding.0.encoder.layers.6.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
698 |
+
"model.vision_embedding.0.encoder.layers.6.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
699 |
+
"model.vision_embedding.0.encoder.layers.6.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
700 |
+
"model.vision_embedding.0.encoder.layers.6.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
701 |
+
"model.vision_embedding.0.encoder.layers.6.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
702 |
+
"model.vision_embedding.0.encoder.layers.6.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
703 |
+
"model.vision_embedding.0.encoder.layers.6.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
704 |
+
"model.vision_embedding.0.encoder.layers.7.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
705 |
+
"model.vision_embedding.0.encoder.layers.7.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
706 |
+
"model.vision_embedding.0.encoder.layers.7.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
707 |
+
"model.vision_embedding.0.encoder.layers.7.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
708 |
+
"model.vision_embedding.0.encoder.layers.7.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
709 |
+
"model.vision_embedding.0.encoder.layers.7.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
710 |
+
"model.vision_embedding.0.encoder.layers.7.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
711 |
+
"model.vision_embedding.0.encoder.layers.7.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
712 |
+
"model.vision_embedding.0.encoder.layers.7.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
713 |
+
"model.vision_embedding.0.encoder.layers.7.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
714 |
+
"model.vision_embedding.0.encoder.layers.7.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
715 |
+
"model.vision_embedding.0.encoder.layers.7.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
716 |
+
"model.vision_embedding.0.encoder.layers.7.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
717 |
+
"model.vision_embedding.0.encoder.layers.7.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
718 |
+
"model.vision_embedding.0.encoder.layers.7.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
719 |
+
"model.vision_embedding.0.encoder.layers.7.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
720 |
+
"model.vision_embedding.0.encoder.layers.8.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
721 |
+
"model.vision_embedding.0.encoder.layers.8.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
722 |
+
"model.vision_embedding.0.encoder.layers.8.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
723 |
+
"model.vision_embedding.0.encoder.layers.8.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
724 |
+
"model.vision_embedding.0.encoder.layers.8.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
725 |
+
"model.vision_embedding.0.encoder.layers.8.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
726 |
+
"model.vision_embedding.0.encoder.layers.8.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
727 |
+
"model.vision_embedding.0.encoder.layers.8.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
728 |
+
"model.vision_embedding.0.encoder.layers.8.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
729 |
+
"model.vision_embedding.0.encoder.layers.8.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
730 |
+
"model.vision_embedding.0.encoder.layers.8.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
731 |
+
"model.vision_embedding.0.encoder.layers.8.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
732 |
+
"model.vision_embedding.0.encoder.layers.8.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
733 |
+
"model.vision_embedding.0.encoder.layers.8.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
734 |
+
"model.vision_embedding.0.encoder.layers.8.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
735 |
+
"model.vision_embedding.0.encoder.layers.8.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
736 |
+
"model.vision_embedding.0.encoder.layers.9.layer_norm1.bias": "model-00002-of-00003.safetensors",
|
737 |
+
"model.vision_embedding.0.encoder.layers.9.layer_norm1.weight": "model-00002-of-00003.safetensors",
|
738 |
+
"model.vision_embedding.0.encoder.layers.9.layer_norm2.bias": "model-00002-of-00003.safetensors",
|
739 |
+
"model.vision_embedding.0.encoder.layers.9.layer_norm2.weight": "model-00002-of-00003.safetensors",
|
740 |
+
"model.vision_embedding.0.encoder.layers.9.mlp.fc1.bias": "model-00002-of-00003.safetensors",
|
741 |
+
"model.vision_embedding.0.encoder.layers.9.mlp.fc1.weight": "model-00002-of-00003.safetensors",
|
742 |
+
"model.vision_embedding.0.encoder.layers.9.mlp.fc2.bias": "model-00002-of-00003.safetensors",
|
743 |
+
"model.vision_embedding.0.encoder.layers.9.mlp.fc2.weight": "model-00002-of-00003.safetensors",
|
744 |
+
"model.vision_embedding.0.encoder.layers.9.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
|
745 |
+
"model.vision_embedding.0.encoder.layers.9.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
|
746 |
+
"model.vision_embedding.0.encoder.layers.9.self_attn.out_proj.bias": "model-00002-of-00003.safetensors",
|
747 |
+
"model.vision_embedding.0.encoder.layers.9.self_attn.out_proj.weight": "model-00002-of-00003.safetensors",
|
748 |
+
"model.vision_embedding.0.encoder.layers.9.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
|
749 |
+
"model.vision_embedding.0.encoder.layers.9.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
|
750 |
+
"model.vision_embedding.0.encoder.layers.9.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
|
751 |
+
"model.vision_embedding.0.encoder.layers.9.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
|
752 |
+
"model.vision_embedding.0.post_layernorm.bias": "model-00002-of-00003.safetensors",
|
753 |
+
"model.vision_embedding.0.post_layernorm.weight": "model-00002-of-00003.safetensors",
|
754 |
+
"model.vision_embedding.1.weight": "model-00002-of-00003.safetensors",
|
755 |
+
"model.vision_embedding.3.weight": "model-00002-of-00003.safetensors"
|
756 |
+
}
|
757 |
+
}
|
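The `weight_map` above pins every parameter name to one of the three safetensors shards. As a minimal sketch (assuming the standard `json` and `safetensors` APIs, with the index file and shards downloaded locally), a single tensor can be resolved to its shard and loaded on its own:

```python
import json
from safetensors import safe_open

# Assumes the index file and the three shards sit in the current directory.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.vision_embedding.0.post_layernorm.weight"
shard_file = index["weight_map"][name]  # e.g. "model-00002-of-00003.safetensors"

with safe_open(shard_file, framework="pt") as shard:
    tensor = shard.get_tensor(name)  # loads only this tensor from its shard
print(name, tuple(tensor.shape))
```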
modular_isaac.py
ADDED
@@ -0,0 +1,1496 @@
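modular_isaac.py carries the custom model, processor, and vision code shipped alongside the weights. A hedged loading sketch, assuming the folder is published to the Hub and wired up for `trust_remote_code` (the repository id below is a placeholder, not the real name):

```python
from transformers import AutoModelForCausalLM, AutoProcessor

repo_id = "your-org/your-model"  # placeholder repository id
processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
```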
1 |
+
from __future__ import annotations
|
2 |
+
|
3 |
+
from collections import defaultdict
|
4 |
+
from typing import Any, Union, TypedDict
|
5 |
+
|
6 |
+
import math
|
7 |
+
import numpy as np
|
8 |
+
import torch
|
9 |
+
import torch.nn as nn
|
10 |
+
import torch.nn.functional as F
|
11 |
+
import PIL.Image
|
12 |
+
|
13 |
+
|
14 |
+
from transformers import (
|
15 |
+
AutoTokenizer,
|
16 |
+
BatchFeature,
|
17 |
+
Qwen3Config,
|
18 |
+
Qwen3ForCausalLM,
|
19 |
+
Qwen3PreTrainedModel,
|
20 |
+
)
|
21 |
+
from transformers.generation.utils import GenerationMixin
|
22 |
+
from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
|
23 |
+
from transformers.models.qwen3.modeling_qwen3 import Qwen3DecoderLayer, Qwen3Model
|
24 |
+
from transformers.processing_utils import ProcessorMixin
|
25 |
+
from transformers.tokenization_utils import TensorType
|
26 |
+
import re
|
27 |
+
|
28 |
+
from transformers.models.siglip2.modeling_siglip2 import (
|
29 |
+
Siglip2MLP,
|
30 |
+
)
|
31 |
+
from transformers.models.siglip2.configuration_siglip2 import Siglip2VisionConfig
|
32 |
+
from perceptron.tensorstream import (
|
33 |
+
Event,
|
34 |
+
Stream,
|
35 |
+
TensorStream,
|
36 |
+
TextType,
|
37 |
+
VisionType,
|
38 |
+
create_stream,
|
39 |
+
group_streams,
|
40 |
+
)
|
41 |
+
from perceptron.tensorstream.ops import (
|
42 |
+
compute_mrope_pos_tensor,
|
43 |
+
modality_mask,
|
44 |
+
reconstruct_tensor_stream_from_compact_dict,
|
45 |
+
slice as ts_slice,
|
46 |
+
tensor_stream_token_view,
|
47 |
+
)
|
48 |
+
|
49 |
+
|
50 |
+
class PixelShuffleSiglip2VisionConfig(Siglip2VisionConfig):
|
51 |
+
"""Vision configuration for Isaac with Pixel Shuffle support.
|
52 |
+
|
53 |
+
Extends Siglip2VisionConfig with additional fields for pixel shuffle.
|
54 |
+
"""
|
55 |
+
|
56 |
+
model_type = "pixel_shuffle_siglip2"
|
57 |
+
base_config_key = "vision_config"
|
58 |
+
|
59 |
+
def __init__(
|
60 |
+
self,
|
61 |
+
pixel_shuffle_scale_factor: int = 1,
|
62 |
+
num_patches: int = 256,
|
63 |
+
**kwargs,
|
64 |
+
):
|
65 |
+
# Call parent with all vision config parameters
|
66 |
+
super().__init__(**kwargs)
|
67 |
+
|
68 |
+
# Add our custom fields
|
69 |
+
self.pixel_shuffle_scale_factor = pixel_shuffle_scale_factor
|
70 |
+
self.num_patches = num_patches
|
71 |
+
|
72 |
+
|
73 |
+
def create_cumulative_seq_lengths(seq_sizes: torch.Tensor, device: torch.device) -> tuple[torch.Tensor, int]:
|
74 |
+
"""Create cumulative sequence lengths for variable-length attention."""
|
75 |
+
cu_seqlens = torch.zeros(len(seq_sizes) + 1, dtype=torch.int32, device=device)
|
76 |
+
cu_seqlens[1:] = seq_sizes.cumsum(0)
|
77 |
+
max_seqlen = int(seq_sizes.max().item()) if len(seq_sizes) > 0 else 0
|
78 |
+
return cu_seqlens, max_seqlen
|
79 |
+
|
80 |
+
|
81 |
+
class PixelShuffleSiglip2VisionConfig(Siglip2VisionConfig):
|
82 |
+
"""Vision configuration for Isaac with Pixel Shuffle support.
|
83 |
+
|
84 |
+
Extends Siglip2VisionConfig with additional fields for pixel shuffle.
|
85 |
+
"""
|
86 |
+
|
87 |
+
model_type = "pixel_shuffle_siglip2"
|
88 |
+
base_config_key = "vision_config"
|
89 |
+
|
90 |
+
def __init__(
|
91 |
+
self,
|
92 |
+
pixel_shuffle_scale_factor: int = 1,
|
93 |
+
num_patches: int = 256,
|
94 |
+
**kwargs,
|
95 |
+
):
|
96 |
+
# Call parent with all vision config parameters
|
97 |
+
super().__init__(**kwargs)
|
98 |
+
|
99 |
+
# Add our custom fields
|
100 |
+
self.pixel_shuffle_scale_factor = pixel_shuffle_scale_factor
|
101 |
+
self.num_patches = num_patches
|
102 |
+
|
103 |
+
|
104 |
+
def create_cumulative_seq_lengths(seq_sizes: torch.Tensor, device: torch.device) -> tuple[torch.Tensor, int]:
|
105 |
+
"""Create cumulative sequence lengths for variable-length attention."""
|
106 |
+
cu_seqlens = torch.zeros(len(seq_sizes) + 1, dtype=torch.int32, device=device)
|
107 |
+
cu_seqlens[1:] = seq_sizes.cumsum(0)
|
108 |
+
max_seqlen = int(seq_sizes.max().item()) if len(seq_sizes) > 0 else 0
|
109 |
+
return cu_seqlens, max_seqlen
|
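For concreteness, a small worked example of `create_cumulative_seq_lengths` as defined above, with toy patch counts on CPU:

```python
import torch

# Three packed images with 6, 16 and 9 patches respectively.
seq_sizes = torch.tensor([6, 16, 9])
cu_seqlens, max_seqlen = create_cumulative_seq_lengths(seq_sizes, torch.device("cpu"))
# cu_seqlens -> tensor([ 0,  6, 22, 31], dtype=torch.int32); max_seqlen -> 16
```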
110 |
+
|
111 |
+
|
112 |
+
class Siglip2VariableSequenceEmbeddings(nn.Module):
|
113 |
+
def __init__(self, config: PixelShuffleSiglip2VisionConfig):
|
114 |
+
super().__init__()
|
115 |
+
self.config = config
|
116 |
+
self.embed_dim = config.hidden_size
|
117 |
+
self.patch_size = config.patch_size
|
118 |
+
|
119 |
+
self.patch_embedding = nn.Linear(
|
120 |
+
in_features=config.num_channels * self.patch_size * self.patch_size,
|
121 |
+
out_features=self.embed_dim,
|
122 |
+
)
|
123 |
+
|
124 |
+
self.num_patches = config.num_patches
|
125 |
+
self.position_embedding_size = int(self.num_patches**0.5)
|
126 |
+
self.position_embedding = nn.Embedding(self.num_patches, self.embed_dim)
|
127 |
+
|
128 |
+
def positional_embeddings(
|
129 |
+
self, packed_seq_patches: tuple[torch.Tensor, torch.Tensor, torch.Tensor]
|
130 |
+
) -> torch.Tensor:
|
131 |
+
# Prepare positional embeddings grid: (1, embed_dim, h, w)
|
132 |
+
positional_embeddings = (
|
133 |
+
self.position_embedding.weight.reshape(self.position_embedding_size, self.position_embedding_size, -1)
|
134 |
+
.permute(2, 0, 1)
|
135 |
+
.unsqueeze(0)
|
136 |
+
)
|
137 |
+
|
138 |
+
_seq_patches, _seq_sizes, spatial_shapes = packed_seq_patches
|
139 |
+
pos_embeds_list = []
|
140 |
+
mode = "bilinear"
|
141 |
+
align_corners = False
|
142 |
+
antialias = True
|
143 |
+
for spatial_shape in spatial_shapes:
|
144 |
+
height, width = spatial_shape
|
145 |
+
# Guard to ensure height and width are positive for torch.compile
|
146 |
+
if height > 0 and width > 0:
|
147 |
+
resized_pos_embed = F.interpolate(
|
148 |
+
positional_embeddings,
|
149 |
+
size=(height, width),
|
150 |
+
mode=mode,
|
151 |
+
align_corners=align_corners,
|
152 |
+
antialias=antialias,
|
153 |
+
)
|
154 |
+
# Reshape from (1, embed_dim, height, width) to (height*width, embed_dim)
|
155 |
+
resized_pos_embed = resized_pos_embed.reshape(self.embed_dim, height * width).transpose(0, 1)
|
156 |
+
else:
|
157 |
+
# Fallback - should never happen in practice
|
158 |
+
resized_pos_embed = positional_embeddings.reshape(
|
159 |
+
self.embed_dim, self.position_embedding_size * self.position_embedding_size
|
160 |
+
).transpose(0, 1)[: height * width]
|
161 |
+
pos_embeds_list.append(resized_pos_embed)
|
162 |
+
|
163 |
+
# Concatenate all positional embeddings along the sequence dimension
|
164 |
+
pos_embeds = torch.cat(pos_embeds_list, dim=0)
|
165 |
+
return pos_embeds
|
166 |
+
|
167 |
+
def forward(self, packed_seq_patches: tuple[torch.Tensor, torch.Tensor, torch.Tensor]):
|
168 |
+
seq_patches, _seq_sizes, _spatial_shapes = packed_seq_patches
|
169 |
+
|
170 |
+
# Apply patch embeddings
|
171 |
+
target_dtype = self.patch_embedding.weight.dtype
|
172 |
+
patch_embeds = self.patch_embedding(seq_patches.to(dtype=target_dtype))
|
173 |
+
pos_embeds = self.positional_embeddings(packed_seq_patches)
|
174 |
+
|
175 |
+
# Add positional embeddings to patch embeddings
|
176 |
+
embeddings = patch_embeds + pos_embeds
|
177 |
+
return embeddings
|
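The embeddings module consumes a packed tuple rather than a padded batch. A toy construction of that input, with shapes assumed from the signature above (flattened patches, per-image patch counts, and per-image (height, width) grids; the channel and patch-size values here are illustrative, the real ones come from the vision config):

```python
import torch

num_channels, patch_size = 3, 16  # example values only
patch_dim = num_channels * patch_size * patch_size

# Two images whose patch grids are 2x3 and 4x4, flattened and concatenated.
seq_patches = torch.randn(2 * 3 + 4 * 4, patch_dim)
seq_sizes = torch.tensor([6, 16])
spatial_shapes = torch.tensor([[2, 3], [4, 4]])

packed_seq_patches = (seq_patches, seq_sizes, spatial_shapes)
```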
178 |
+
|
179 |
+
|
180 |
+
class Siglip2VariableLengthAttention(nn.Module):
|
181 |
+
"""Custom attention that supports variable-length sequences with flash attention."""
|
182 |
+
|
183 |
+
def __init__(self, config):
|
184 |
+
super().__init__()
|
185 |
+
self.config = config
|
186 |
+
self.embed_dim = config.hidden_size
|
187 |
+
self.num_heads = config.num_attention_heads
|
188 |
+
self.head_dim = self.embed_dim // self.num_heads
|
189 |
+
if self.head_dim * self.num_heads != self.embed_dim:
|
190 |
+
raise ValueError(
|
191 |
+
f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:"
|
192 |
+
f" {self.num_heads})."
|
193 |
+
)
|
194 |
+
self.scale = self.head_dim**-0.5
|
195 |
+
self.dropout = config.attention_dropout
|
196 |
+
|
197 |
+
self.k_proj = nn.Linear(self.embed_dim, self.embed_dim)
|
198 |
+
self.v_proj = nn.Linear(self.embed_dim, self.embed_dim)
|
199 |
+
self.q_proj = nn.Linear(self.embed_dim, self.embed_dim)
|
200 |
+
self.out_proj = nn.Linear(self.embed_dim, self.embed_dim)
|
201 |
+
|
202 |
+
def forward(self, hidden_states, cu_seqlens=None, max_seqlen=None):
|
203 |
+
batch_size, seq_len, _ = hidden_states.size()
|
204 |
+
|
205 |
+
# For variable-length attention, we need to reshape to (total_tokens, embed_dim)
|
206 |
+
if batch_size != 1:
|
207 |
+
raise ValueError("Variable-length attention expects batch_size=1 for packed sequences")
|
208 |
+
hidden_states = hidden_states.squeeze(0) # Remove batch dimension: (seq_len, embed_dim)
|
209 |
+
|
210 |
+
# Store original dtype
|
211 |
+
orig_dtype = hidden_states.dtype
|
212 |
+
|
213 |
+
# 1. Linear projections
|
214 |
+
Q = self.q_proj(hidden_states) # (seq_len, embed_dim)
|
215 |
+
K = self.k_proj(hidden_states) # (seq_len, embed_dim)
|
216 |
+
V = self.v_proj(hidden_states) # (seq_len, embed_dim)
|
217 |
+
|
218 |
+
# 2. Reshape for multi-head attention: (seq_len, n_heads, head_dim)
|
219 |
+
Q = Q.view(-1, self.num_heads, self.embed_dim // self.num_heads)
|
220 |
+
K = K.view(-1, self.num_heads, self.embed_dim // self.num_heads)
|
221 |
+
V = V.view(-1, self.num_heads, self.embed_dim // self.num_heads)
|
222 |
+
|
223 |
+
# 3. Apply variable-length attention using flash attention
|
224 |
+
attn_output, _, _, _, _ = torch.ops.aten._flash_attention_forward(
|
225 |
+
query=Q,
|
226 |
+
key=K,
|
227 |
+
value=V,
|
228 |
+
cum_seq_q=cu_seqlens,
|
229 |
+
cum_seq_k=cu_seqlens,
|
230 |
+
max_q=max_seqlen,
|
231 |
+
max_k=max_seqlen,
|
232 |
+
dropout_p=self.dropout if self.training else 0.0,
|
233 |
+
is_causal=False,
|
234 |
+
return_debug_mask=False,
|
235 |
+
scale=self.scale,
|
236 |
+
window_size_left=-1,
|
237 |
+
window_size_right=-1,
|
238 |
+
alibi_slopes=None,
|
239 |
+
)
|
240 |
+
|
241 |
+
# 4. Reshape attention output from (seq_len, n_heads, head_dim) to (seq_len, embed_dim)
|
242 |
+
attn_output = attn_output.reshape(seq_len, self.embed_dim)
|
243 |
+
|
244 |
+
# 5. Convert back to original dtype if needed
|
245 |
+
if attn_output.dtype != orig_dtype:
|
246 |
+
attn_output = attn_output.to(orig_dtype)
|
247 |
+
|
248 |
+
# 6. Project output
|
249 |
+
attn_output = self.out_proj(attn_output) # (seq_len, embed_dim)
|
250 |
+
|
251 |
+
# 7. Add back batch dimension for compatibility
|
252 |
+
attn_output = attn_output.unsqueeze(0) # (1, seq_len, embed_dim)
|
253 |
+
|
254 |
+
return attn_output, None
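# Illustrative note, not part of the original file: `cu_seqlens` follows the standard
# varlen flash-attention convention of cumulative sequence boundaries. For two packed
# images with 16 and 9 patches one would pass, for example,
#   cu_seqlens = torch.tensor([0, 16, 25], dtype=torch.int32); max_seqlen = 16
# so that rows [0:16) and [16:25) only attend within their own image.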
|
255 |
+
|
256 |
+
|
257 |
+
class IsaacSiglip2EncoderLayer(nn.Module):
|
258 |
+
"""Siglip2 encoder layer with variable-length attention."""
|
259 |
+
|
260 |
+
def __init__(self, config: PixelShuffleSiglip2VisionConfig):
|
261 |
+
super().__init__()
|
262 |
+
self.embed_dim = config.hidden_size
|
263 |
+
self.self_attn = Siglip2VariableLengthAttention(config)
|
264 |
+
|
265 |
+
self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
|
266 |
+
self.mlp = Siglip2MLP(config) # Use HF's Siglip2MLP
|
267 |
+
self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
|
268 |
+
|
269 |
+
def forward(
|
270 |
+
self,
|
271 |
+
hidden_states: torch.Tensor,
|
272 |
+
cu_seqlens: torch.Tensor = None,
|
273 |
+
max_seqlen: int = None,
|
274 |
+
) -> tuple[torch.FloatTensor]:
|
275 |
+
residual = hidden_states
|
276 |
+
|
277 |
+
hidden_states = self.layer_norm1(hidden_states)
|
278 |
+
|
279 |
+
hidden_states, attn_weights = self.self_attn(
|
280 |
+
hidden_states=hidden_states,
|
281 |
+
cu_seqlens=cu_seqlens,
|
282 |
+
max_seqlen=max_seqlen,
|
283 |
+
)
|
284 |
+
|
285 |
+
hidden_states = residual + hidden_states
|
286 |
+
|
287 |
+
residual = hidden_states
|
288 |
+
hidden_states = self.layer_norm2(hidden_states)
|
289 |
+
hidden_states = self.mlp(hidden_states)
|
290 |
+
hidden_states = residual + hidden_states
|
291 |
+
|
292 |
+
return (hidden_states,)
|
293 |
+
|
294 |
+
|
295 |
+
class IsaacEncoder(nn.Module):
|
296 |
+
"""Encoder using Isaac encoder layers with variable-length attention support."""
|
297 |
+
|
298 |
+
def __init__(self, config: PixelShuffleSiglip2VisionConfig):
|
299 |
+
super().__init__()
|
300 |
+
self.config = config
|
301 |
+
self.layers = nn.ModuleList([IsaacSiglip2EncoderLayer(config) for _ in range(config.num_hidden_layers)])
|
302 |
+
|
303 |
+
def forward(
|
304 |
+
self,
|
305 |
+
inputs_embeds,
|
306 |
+
cu_seqlens: torch.Tensor | None = None,
|
307 |
+
max_seqlen: int | None = None,
|
308 |
+
output_hidden_states: bool = False,
|
309 |
+
):
|
310 |
+
all_hidden_states = () if output_hidden_states else None
|
311 |
+
|
312 |
+
hidden_states = inputs_embeds
|
313 |
+
|
314 |
+
for encoder_layer in self.layers:
|
315 |
+
if output_hidden_states:
|
316 |
+
all_hidden_states = all_hidden_states + (hidden_states,)
|
317 |
+
|
318 |
+
layer_outputs = encoder_layer(
|
319 |
+
hidden_states,
|
320 |
+
cu_seqlens,
|
321 |
+
max_seqlen,
|
322 |
+
)
|
323 |
+
|
324 |
+
hidden_states = layer_outputs[0]
|
325 |
+
|
326 |
+
if output_hidden_states:
|
327 |
+
all_hidden_states = all_hidden_states + (hidden_states,)
|
328 |
+
|
329 |
+
return hidden_states, all_hidden_states, None
|
330 |
+
|
331 |
+
|
332 |
+
def create_pixel_shuffle_index_map(
|
333 |
+
seq_sizes: torch.Tensor,
|
334 |
+
token_grids: torch.Tensor,
|
335 |
+
scale_factor: int = 1,
|
336 |
+
device: torch.device | None = None,
|
337 |
+
) -> torch.Tensor:
|
338 |
+
"""
|
339 |
+
Build a gather-index map that tells us, for every *output* token after
|
340 |
+
pixel-shuffle, which `scale_factor**2` *input* tokens are being merged.
|
341 |
+
|
342 |
+
Args
|
343 |
+
----
|
344 |
+
seq_sizes : (num_images,) - #patches in each image (row-major order)
|
345 |
+
token_grids : (num_images,2) - (height, width) for every image
|
346 |
+
scale_factor : spatial down-scale factor (≥2)
|
347 |
+
device : (optional) overrides `seq_sizes.device`
|
348 |
+
|
349 |
+
Returns
|
350 |
+
-------
|
351 |
+
gather_idx : (new_total_seq_len, scale_factor**2) int64 tensor.
|
352 |
+
gather_idx[i, j] is the *flat* index into the *original*
|
353 |
+
packed sequence for the j-th sub-patch that forms the
|
354 |
+
i-th output token.
|
355 |
+
"""
|
356 |
+
if device is None:
|
357 |
+
device = seq_sizes.device
|
358 |
+
|
359 |
+
r = int(scale_factor)
|
360 |
+
if r < 2:
|
361 |
+
raise ValueError("`scale_factor` must be ≥ 2")
|
362 |
+
|
363 |
+
# Safety: all spatial dims must be divisible by r
|
364 |
+
# This divisibility check cannot run under torch.compile fullgraph mode, hence the guard below
|
365 |
+
if not torch.compiler.is_compiling():
|
366 |
+
if not ((token_grids[:, 0] % r == 0).all() and (token_grids[:, 1] % r == 0).all()):
|
367 |
+
raise AssertionError(
|
368 |
+
f"Every (H,W) in `token_grids` must be divisible by scale_factor={r}, got {token_grids.tolist()}"
|
369 |
+
)
|
370 |
+
|
371 |
+
gather_chunks: list[torch.Tensor] = []
|
372 |
+
tok_offset = 0
|
373 |
+
|
374 |
+
for seq_len, (h, w) in zip(seq_sizes.tolist(), token_grids.tolist(), strict=False):
|
375 |
+
# Build the (H, W) grid of flat indices for this image
|
376 |
+
grid = torch.arange(seq_len, device=device, dtype=torch.int64) + tok_offset
|
377 |
+
grid = grid.view(h, w) # (H, W)
|
378 |
+
|
379 |
+
# -------- identical ordering to the fixed-resolution pixel-shuffle routine --------
|
380 |
+
# Step 1: split width into blocks of r
|
381 |
+
grid = grid.view(h, w // r, r) # (H, W/r, r)
|
382 |
+
# Step 2: now split height into blocks of r
|
383 |
+
grid = grid.view(h // r, r, w // r, r) # (H/r, r, W/r, r)
|
384 |
+
# Step 3: final permutation to (H/r, W/r, r, r)
|
385 |
+
grid = grid.permute(0, 2, 1, 3).contiguous() # (H/r, W/r, r, r)
|
386 |
+
# Step 4: each (r, r) block forms one output token
|
387 |
+
gather_chunks.append(grid.reshape(-1, r * r)) # (H*W / r², r²)
|
388 |
+
|
389 |
+
tok_offset += seq_len
|
390 |
+
|
391 |
+
# Concatenate over all images in the packed batch
|
392 |
+
gather_idx = torch.cat(gather_chunks, dim=0) # (Σ_i HᵢWᵢ/r², r²)
|
393 |
+
return gather_idx
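# Tiny worked example (illustrative, not in the original file): a single 2x2 patch grid
# with scale_factor=2 collapses into one output token that gathers all four input patches:
#   create_pixel_shuffle_index_map(torch.tensor([4]), torch.tensor([[2, 2]]), scale_factor=2)
#   # -> tensor([[0, 1, 2, 3]])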
|
394 |
+
|
395 |
+
|
396 |
+
def pixel_shuffle_varlen(
|
397 |
+
x: torch.Tensor,
|
398 |
+
token_grids: torch.Tensor,
|
399 |
+
scale_factor: int = 1,
|
400 |
+
) -> torch.Tensor:
|
401 |
+
r"""Apply pixel shuffle to a packed vision sequence without unpacking per image.
|
402 |
+
|
403 |
+
Args:
|
404 |
+
x (`torch.Tensor`):
|
405 |
+
Concatenated vision embeddings. Accepts `(seq_len, hidden_size)` or `(1, seq_len, hidden_size)` shapes
|
406 |
+
produced by stacking image patches.
|
407 |
+
token_grids (`torch.Tensor`):
|
408 |
+
Integer tensor of shape `(num_images, 2)` whose rows give the `(height, width)` patch grid sizes
|
409 |
+
corresponding to each image segment inside `x`.
|
410 |
+
scale_factor (`int`, *optional*, defaults to 1):
|
411 |
+
Spatial down-sampling factor specific to pixel shuffle. Values greater than one merge `scale_factor**2` neighboring patches into a
|
412 |
+
single embedding channel-group.
|
413 |
+
|
414 |
+
Returns:
|
415 |
+
`torch.Tensor`: Pixel-shuffled embeddings with shape matching the input convention:
|
416 |
+
`(seq_len, hidden_size * scale_factor**2)` when the input was 2D, or `(1, seq_len, hidden_size * scale_factor**2)`
|
417 |
+
if the singleton batch dimension was present.
|
418 |
+
|
419 |
+
Raises:
|
420 |
+
ValueError: If more than one batch item is provided.
|
421 |
+
"""
|
422 |
+
keep_batch_dim = x.dim() == 3
|
423 |
+
if keep_batch_dim:
|
424 |
+
if x.size(0) != 1:
|
425 |
+
raise AssertionError("Packed sequence is expected to have batch_size == 1")
|
426 |
+
x_ = x.squeeze(0) # (seq, embed)
|
427 |
+
else:
|
428 |
+
x_ = x # (seq, embed)
|
429 |
+
|
430 |
+
embed_dim = x_.size(-1)
|
431 |
+
r = int(scale_factor)
|
432 |
+
|
433 |
+
# Calculate seq_sizes from token_grids
|
434 |
+
seq_sizes = torch.prod(token_grids, dim=-1)
|
435 |
+
|
436 |
+
# Build index map and gather in one go
|
437 |
+
gather_idx = create_pixel_shuffle_index_map(
|
438 |
+
seq_sizes=seq_sizes,
|
439 |
+
token_grids=token_grids,
|
440 |
+
scale_factor=r,
|
441 |
+
device=x_.device,
|
442 |
+
) # (new_seq, r²)
|
443 |
+
|
444 |
+
# Gather → (new_seq, r², embed_dim)
|
445 |
+
gathered = x_[gather_idx] # fancy indexing keeps gradient
|
446 |
+
|
447 |
+
# Merge the r² group dimension into channels to finish the shuffle
|
448 |
+
out = gathered.reshape(gathered.size(0), embed_dim * r * r)
|
449 |
+
|
450 |
+
# Restore batch dimension if needed
|
451 |
+
if keep_batch_dim:
|
452 |
+
out = out.unsqueeze(0)
|
453 |
+
return out
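# Usage sketch (illustrative, shapes are assumptions): pack two images with different
# patch grids into one sequence and shuffle with r=2; hidden_size=8 is arbitrary here.
#   token_grids = torch.tensor([[4, 4], [2, 6]])
#   x = torch.randn(int(torch.prod(token_grids, dim=-1).sum()), 8)
#   out = pixel_shuffle_varlen(x, token_grids, scale_factor=2)
#   # out.shape == (4 + 3, 8 * 4) == (7, 32)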
|
454 |
+
|
455 |
+
|
456 |
+
class Siglip2SequenceVisionTransformer(nn.Module):
|
457 |
+
def __init__(self, config: PixelShuffleSiglip2VisionConfig):
|
458 |
+
super().__init__()
|
459 |
+
self.config = config
|
460 |
+
self.embeddings = Siglip2VariableSequenceEmbeddings(config)
|
461 |
+
self.encoder = IsaacEncoder(config)
|
462 |
+
self.post_layernorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
|
463 |
+
self.pixel_shuffle_scale_factor = config.pixel_shuffle_scale_factor
|
464 |
+
|
465 |
+
def forward(self, packed_seq_patches: tuple[torch.Tensor, torch.Tensor]):
|
466 |
+
seq_patches, token_grids = packed_seq_patches
|
467 |
+
seq_sizes = torch.prod(token_grids, dim=-1)
|
468 |
+
|
469 |
+
# Get embeddings from packed sequence
|
470 |
+
hidden_states = self.embeddings((seq_patches, seq_sizes, token_grids))
|
471 |
+
|
472 |
+
# Add a pseudo batch dimension for the encoder
|
473 |
+
hidden_states = hidden_states.unsqueeze(0)
|
474 |
+
|
475 |
+
# Generate cumulative sequence lengths for variable-length attention
|
476 |
+
cu_seqlens, max_seqlen = create_cumulative_seq_lengths(seq_sizes, hidden_states.device)
|
477 |
+
|
478 |
+
# Pass through encoder with variable-length attention parameters
|
479 |
+
hidden_states, _, _ = self.encoder(
|
480 |
+
inputs_embeds=hidden_states,
|
481 |
+
cu_seqlens=cu_seqlens,
|
482 |
+
max_seqlen=max_seqlen,
|
483 |
+
)
|
484 |
+
|
485 |
+
# Apply final layer normalization
|
486 |
+
hidden_states = self.post_layernorm(hidden_states)
|
487 |
+
|
488 |
+
if self.pixel_shuffle_scale_factor > 1:
|
489 |
+
hidden_states = pixel_shuffle_varlen(
|
490 |
+
x=hidden_states,
|
491 |
+
token_grids=token_grids,
|
492 |
+
scale_factor=self.pixel_shuffle_scale_factor,
|
493 |
+
)
|
494 |
+
# Remove the pseudo batch dimension we added earlier
|
495 |
+
hidden_states = hidden_states.squeeze(0)
|
496 |
+
|
497 |
+
# Return the full sequence of embeddings
|
498 |
+
return hidden_states
|
499 |
+
|
500 |
+
|
501 |
+
# ============================================================================
|
502 |
+
# Configuration
|
503 |
+
# ============================================================================
|
504 |
+
|
505 |
+
MAX_PIXELS = 60_000_000 # 60‑megapixel ceiling ≈ 8200 × 7300 px
|
506 |
+
|
507 |
+
# Vision preprocessing constants
|
508 |
+
VISION_MEAN = (0.5, 0.5, 0.5)
|
509 |
+
VISION_STD = (0.5, 0.5, 0.5)
|
510 |
+
VISION_SCALE = 1 / 255
|
511 |
+
|
512 |
+
|
513 |
+
def _make_writeable(arr: np.ndarray) -> np.ndarray:
|
514 |
+
"""Return *arr* itself if it is already writeable, otherwise try to flip the
|
515 |
+
write flag in-place and finally fall back to `arr.copy()`.
|
516 |
+
This guarantees the buffer handed to `torch.from_numpy()` is always
|
517 |
+
writeable, silencing the PyTorch warning about undefined behaviour.
|
518 |
+
"""
|
519 |
+
if arr.flags.writeable:
|
520 |
+
return arr
|
521 |
+
|
522 |
+
# First, try the cheap path — in‑place flag toggle (works for mmap'd arrays
|
523 |
+
# and some shared memory buffers):
|
524 |
+
try:
|
525 |
+
arr.setflags(write=True)
|
526 |
+
return arr # success: no data copy
|
527 |
+
except ValueError:
|
528 |
+
# Buffer is inherently read‑only (e.g. backed by PyAV / PIL): make copy
|
529 |
+
return arr.copy()
|
530 |
+
|
531 |
+
|
532 |
+
def extract_image_pil(image: PIL.Image.Image) -> torch.Tensor | None:
|
533 |
+
if image.width * image.height > MAX_PIXELS:
|
534 |
+
raise ValueError(f"Image (w={image.width}, h={image.height}) > MAX=`{MAX_PIXELS}`")
|
535 |
+
img = image if image.mode == "RGB" else image.convert("RGB")
|
536 |
+
arr = np.asarray(img)
|
537 |
+
arr = _make_writeable(arr)
|
538 |
+
return torch.from_numpy(arr)
|
539 |
+
|
540 |
+
|
541 |
+
def get_image_size_for_max_num_patches(
|
542 |
+
image_height: int,
|
543 |
+
image_width: int,
|
544 |
+
patch_size: int,
|
545 |
+
max_num_patches: int,
|
546 |
+
min_num_patches: int | None = None,
|
547 |
+
eps: float = 1e-5,
|
548 |
+
pixel_shuffle_scale: int = 1,
|
549 |
+
) -> tuple[int, int]:
|
550 |
+
r"""Compute a target resolution whose patch grid satisfies the patch-count constraints.
|
551 |
+
|
552 |
+
Args:
|
553 |
+
image_height (`int`):
|
554 |
+
Height in pixels of the source image prior to any resizing.
|
555 |
+
image_width (`int`):
|
556 |
+
Width in pixels of the source image prior to any resizing.
|
557 |
+
patch_size (`int`):
|
558 |
+
Size of the square patch used by the vision encoder.
|
559 |
+
max_num_patches (`int`):
|
560 |
+
Upper bound on `(height / patch_size) * (width / patch_size)` after resizing.
|
561 |
+
min_num_patches (`int`, *optional*):
|
562 |
+
Lower bound on the number of patches. When provided the image will be scaled up if necessary.
|
563 |
+
eps (`float`, *optional*, defaults to 1e-5):
|
564 |
+
Convergence tolerance for the internal binary search that determines the target dimensions.
|
565 |
+
pixel_shuffle_scale (`int`, *optional*, defaults to 1):
|
566 |
+
Additional stride multiplier applied when pixel shuffle later reduces spatial resolution.
|
567 |
+
|
568 |
+
Returns:
|
569 |
+
`tuple[int, int]`: Height and width (in pixels) that are multiples of `patch_size * pixel_shuffle_scale`
|
570 |
+
and respect both the maximum and optional minimum patch-count constraints.
|
571 |
+
"""
|
572 |
+
|
573 |
+
def get_scaled_image_size(scale, original_size, patch_size, pixel_shuffle_scale):
|
574 |
+
scaled_size = scale * original_size
|
575 |
+
divisor = patch_size * pixel_shuffle_scale
|
576 |
+
scaled_size = math.ceil(scaled_size / divisor) * divisor
|
577 |
+
scaled_size = max(divisor, scaled_size)
|
578 |
+
return int(scaled_size)
|
579 |
+
|
580 |
+
# Ensure divisibility
|
581 |
+
divisor = patch_size * pixel_shuffle_scale
|
582 |
+
adjusted_height = math.ceil(image_height / divisor) * divisor
|
583 |
+
adjusted_height = max(divisor, adjusted_height)
|
584 |
+
adjusted_width = math.ceil(image_width / divisor) * divisor
|
585 |
+
adjusted_width = max(divisor, adjusted_width)
|
586 |
+
|
587 |
+
num_patches = (adjusted_height / patch_size) * (adjusted_width / patch_size)
|
588 |
+
|
589 |
+
if min_num_patches is not None and num_patches < min_num_patches:
|
590 |
+
# Scale up
|
591 |
+
scale_min, scale_max = 1.0, 100.0
|
592 |
+
while (scale_max - scale_min) >= eps:
|
593 |
+
scale = (scale_min + scale_max) / 2
|
594 |
+
target_height = get_scaled_image_size(scale, image_height, patch_size, pixel_shuffle_scale)
|
595 |
+
target_width = get_scaled_image_size(scale, image_width, patch_size, pixel_shuffle_scale)
|
596 |
+
num_patches = (target_height / patch_size) * (target_width / patch_size)
|
597 |
+
if num_patches >= min_num_patches:
|
598 |
+
scale_max = scale
|
599 |
+
else:
|
600 |
+
scale_min = scale
|
601 |
+
scale = scale_max
|
602 |
+
target_height = get_scaled_image_size(scale, image_height, patch_size, pixel_shuffle_scale)
|
603 |
+
target_width = get_scaled_image_size(scale, image_width, patch_size, pixel_shuffle_scale)
|
604 |
+
return target_height, target_width
|
605 |
+
elif num_patches <= max_num_patches:
|
606 |
+
return adjusted_height, adjusted_width
|
607 |
+
else:
|
608 |
+
# Scale down
|
609 |
+
scale_min, scale_max = eps / 10, 1.0
|
610 |
+
while (scale_max - scale_min) >= eps:
|
611 |
+
scale = (scale_min + scale_max) / 2
|
612 |
+
target_height = get_scaled_image_size(scale, image_height, patch_size, pixel_shuffle_scale)
|
613 |
+
target_width = get_scaled_image_size(scale, image_width, patch_size, pixel_shuffle_scale)
|
614 |
+
num_patches = (target_height / patch_size) * (target_width / patch_size)
|
615 |
+
if num_patches <= max_num_patches:
|
616 |
+
scale_min = scale
|
617 |
+
else:
|
618 |
+
scale_max = scale
|
619 |
+
scale = scale_min
|
620 |
+
target_height = get_scaled_image_size(scale, image_height, patch_size, pixel_shuffle_scale)
|
621 |
+
target_width = get_scaled_image_size(scale, image_width, patch_size, pixel_shuffle_scale)
|
622 |
+
return target_height, target_width
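# Worked example (illustrative): get_image_size_for_max_num_patches(100, 100, patch_size=16,
# max_num_patches=256) first rounds both sides up to a multiple of 16, i.e. 112, which gives
# (112 / 16) * (112 / 16) = 49 patches <= 256, so (112, 112) is returned without rescaling.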
|
623 |
+
|
624 |
+
|
625 |
+
_MEAN_TENSOR = torch.tensor(VISION_MEAN, dtype=torch.float32).view(1, 1, 1, -1)
|
626 |
+
_STD_TENSOR = torch.tensor(VISION_STD, dtype=torch.float32).view(1, 1, 1, -1)
|
627 |
+
|
628 |
+
|
629 |
+
def prepare_image_tensor(
|
630 |
+
image: torch.Tensor,
|
631 |
+
scale: float = VISION_SCALE,
|
632 |
+
) -> torch.Tensor:
|
633 |
+
r"""Standardize RGB images prior to patch extraction via rescaling and whitening.
|
634 |
+
|
635 |
+
Args:
|
636 |
+
image (`torch.Tensor`):
|
637 |
+
Tensor with shape `(..., height, width, 3)` containing RGB values. The tensor is converted to floating
|
638 |
+
point if needed.
|
639 |
+
scale (`float`, *optional*, defaults to `VISION_SCALE`):
|
640 |
+
Scalar multiplier applied before normalization.
|
641 |
+
Returns:
|
642 |
+
`torch.Tensor`: Normalized tensor with the same shape as the input and dtype `torch.float32`.
|
643 |
+
"""
|
644 |
+
if not torch.is_floating_point(image):
|
645 |
+
image = image.float()
|
646 |
+
rescaled = image * scale
|
647 |
+
|
648 |
+
# Use precomputed tensors and move to the correct device if needed
|
649 |
+
mean_tensor = _MEAN_TENSOR.to(image.device)
|
650 |
+
std_tensor = _STD_TENSOR.to(image.device)
|
651 |
+
|
652 |
+
normalized = (rescaled - mean_tensor) / std_tensor
|
653 |
+
return normalized
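# Numeric check (illustrative): with VISION_SCALE = 1/255 and mean = std = 0.5, a raw pixel
# of 255 maps to (255/255 - 0.5) / 0.5 = 1.0 and a raw pixel of 0 maps to -1.0, so the
# normalized inputs land in [-1, 1].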
|
654 |
+
|
655 |
+
|
656 |
+
def patchify_vision(image: torch.Tensor, patch_size: int) -> torch.Tensor:
|
657 |
+
r"""Convert normalized images into flattened ViT-style patches.
|
658 |
+
|
659 |
+
Args:
|
660 |
+
image (`torch.Tensor`):
|
661 |
+
Tensor of shape `(num_images, height, width, channels)`.
|
662 |
+
patch_size (`int`):
|
663 |
+
Edge length of the square patches.
|
664 |
+
|
665 |
+
Returns:
|
666 |
+
`torch.Tensor`:
|
667 |
+
Patch tensor where each position stores the flattened pixels belonging to that patch.
|
668 |
+
|
669 |
+
Raises:
|
670 |
+
ValueError: If `height` or `width` is not divisible by `patch_size`.
|
671 |
+
"""
|
672 |
+
num_images, height, width, channels = image.shape
|
673 |
+
if height % patch_size or width % patch_size:
|
674 |
+
raise ValueError(f"Dimensions of images {image.shape} are not divisible by patch_size={patch_size}.")
|
675 |
+
patches = image.reshape(num_images, height // patch_size, patch_size, width // patch_size, patch_size, channels)
|
676 |
+
patches = patches.permute(0, 1, 3, 2, 4, 5)
|
677 |
+
patches = patches.reshape(num_images, height // patch_size, width // patch_size, channels * patch_size * patch_size)
|
678 |
+
return patches
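# Shape example (illustrative): one 32x48 RGB image with patch_size=16 gives
#   patchify_vision(torch.randn(1, 32, 48, 3), patch_size=16).shape == (1, 2, 3, 768)
# since each patch flattens 16 * 16 * 3 = 768 values.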
|
679 |
+
|
680 |
+
|
681 |
+
def process_vision_for_patches(
|
682 |
+
images: torch.Tensor,
|
683 |
+
patch_size: int,
|
684 |
+
max_num_patches: int,
|
685 |
+
min_num_patches: int | None = None,
|
686 |
+
pixel_shuffle_scale: int = 1,
|
687 |
+
) -> tuple[torch.Tensor, list[int]]:
|
688 |
+
r"""Resize, normalize, and patchify RGB images for the vision encoder.
|
689 |
+
|
690 |
+
Args:
|
691 |
+
images (`torch.Tensor`):
|
692 |
+
Either `(height, width, channels)` for a single image or `(num_images, height, width, channels)` for a
|
693 |
+
batch. Channels are expected to be RGB.
|
694 |
+
patch_size (`int`):
|
695 |
+
Edge length of square patches; implicitly controls the resize grid granularity.
|
696 |
+
max_num_patches (`int`):
|
697 |
+
Maximum number of patches allowed after resizing.
|
698 |
+
min_num_patches (`int`, *optional*):
|
699 |
+
Minimum number of patches. If provided, the routine upsamples images as needed to satisfy the lower bound.
|
700 |
+
pixel_shuffle_scale (`int`, *optional*, defaults to 1):
|
701 |
+
pixel shuffle scale factor; influences the target grid that the function produces.
|
702 |
+
|
703 |
+
Returns:
|
704 |
+
`tuple[torch.Tensor, list[int]]`: A pair `(patches, dims_virtual)` where `patches` has shape
|
705 |
+
`(num_images, target_h / patch_size, target_w / patch_size, channels * patch_size**2)` and `dims_virtual`
|
706 |
+
encodes effective `(images, height, width)` dimensions after optional pixel shuffling.
|
707 |
+
"""
|
708 |
+
# Add batch dim if single image
|
709 |
+
if images.dim() == 3:
|
710 |
+
images = images.unsqueeze(0)
|
711 |
+
|
712 |
+
# Permute to channel first for resize
|
713 |
+
images = images.permute(0, 3, 1, 2)
|
714 |
+
|
715 |
+
# Get target dimensions
|
716 |
+
_, _, orig_height, orig_width = images.shape
|
717 |
+
target_height, target_width = get_image_size_for_max_num_patches(
|
718 |
+
orig_height,
|
719 |
+
orig_width,
|
720 |
+
patch_size,
|
721 |
+
max_num_patches,
|
722 |
+
min_num_patches=min_num_patches,
|
723 |
+
pixel_shuffle_scale=pixel_shuffle_scale,
|
724 |
+
)
|
725 |
+
|
726 |
+
# Resize
|
727 |
+
images = F.interpolate(
|
728 |
+
images,
|
729 |
+
size=(target_height, target_width),
|
730 |
+
mode="bilinear",
|
731 |
+
align_corners=False,
|
732 |
+
)
|
733 |
+
|
734 |
+
# Back to channel last
|
735 |
+
images = images.permute(0, 2, 3, 1)
|
736 |
+
|
737 |
+
# Normalize
|
738 |
+
images = prepare_image_tensor(images)
|
739 |
+
|
740 |
+
# Patchify
|
741 |
+
patches = patchify_vision(images, patch_size=patch_size)
|
742 |
+
|
743 |
+
# Calculate dimensions for the patches
|
744 |
+
n_images, h_patches, w_patches, _ = patches.shape
|
745 |
+
dims_virtual = (
|
746 |
+
[1, h_patches, w_patches]
|
747 |
+
if pixel_shuffle_scale == 1
|
748 |
+
else [1, h_patches // pixel_shuffle_scale, w_patches // pixel_shuffle_scale]
|
749 |
+
)
|
750 |
+
|
751 |
+
return patches, dims_virtual
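# Worked example (illustrative): a 256x256 RGB image with patch_size=16, max_num_patches=256
# and pixel_shuffle_scale=2 keeps its 16x16 patch grid (256 patches <= 256), so the returned
# patches have shape (1, 16, 16, 768) and dims_virtual == [1, 8, 8].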
|
752 |
+
|
753 |
+
|
754 |
+
def precompute_inv_freq(theta: float, dim: int) -> torch.Tensor:
|
755 |
+
"""
|
756 |
+
Returns shape (dim//2,).
|
757 |
+
"""
|
758 |
+
inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
|
759 |
+
return inv_freq # type: ignore[return-value]
|
760 |
+
|
761 |
+
|
762 |
+
def precompute_cos_sin_3d(
|
763 |
+
position_ids: torch.Tensor, # shape (3, B, T)
|
764 |
+
inv_freq: torch.Tensor, # shape (dim//2,)
|
765 |
+
mrope_half_section: list[int], # sum to dim//2
|
766 |
+
) -> tuple[torch.Tensor, torch.Tensor]:
|
767 |
+
r"""Generate 3D rotary embeddings for multi-axis positions.
|
768 |
+
|
769 |
+
Args:
|
770 |
+
position_ids (`torch.Tensor`):
|
771 |
+
Tensor of shape `(3, batch_size, seq_len)` containing positional indices for the x/y/t axes.
|
772 |
+
inv_freq (`torch.Tensor`):
|
773 |
+
Precomputed inverse frequency vector used to derive rotary phases.
|
774 |
+
mrope_half_section (`list[int]`):
|
775 |
+
Sizes of the axis-specific frequency blocks; entries must sum to `dim // 2`.
|
776 |
+
|
777 |
+
Returns:
|
778 |
+
`tuple[torch.Tensor, torch.Tensor]`: Cosine and sine tensors, each of shape `(batch_size, seq_len, dim)`, ready
|
779 |
+
to be passed into rotary attention layers.
|
780 |
+
"""
|
781 |
+
B = position_ids.shape[1]
|
782 |
+
T = position_ids.shape[2]
|
783 |
+
dim_half = inv_freq.shape[0]
|
784 |
+
device = position_ids.device
|
785 |
+
|
786 |
+
# Initialize with full dimension (not half) to match LLaMA
|
787 |
+
cos_3d = torch.zeros((B, T, dim_half * 2), dtype=torch.float32, device=device)
|
788 |
+
sin_3d = torch.zeros((B, T, dim_half * 2), dtype=torch.float32, device=device)
|
789 |
+
|
790 |
+
offset = 0
|
791 |
+
for d in range(3):
|
792 |
+
block_size = mrope_half_section[d]
|
793 |
+
freq_slice = inv_freq[offset : offset + block_size] # shape => (block_size,)
|
794 |
+
# shape => (B, T, block_size)
|
795 |
+
phase = position_ids[d].unsqueeze(-1).float() * freq_slice
|
796 |
+
|
797 |
+
cos_part = phase.cos()
|
798 |
+
sin_part = phase.sin()
|
799 |
+
|
800 |
+
# Duplicate values for both halves of the dimension
|
801 |
+
cos_3d[:, :, offset : offset + block_size] = cos_part
|
802 |
+
cos_3d[:, :, dim_half + offset : dim_half + offset + block_size] = cos_part
|
803 |
+
sin_3d[:, :, offset : offset + block_size] = sin_part
|
804 |
+
sin_3d[:, :, dim_half + offset : dim_half + offset + block_size] = sin_part
|
805 |
+
|
806 |
+
offset += block_size
|
807 |
+
|
808 |
+
return cos_3d, sin_3d
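# Shape sketch (illustrative): for head_dim=64 the rotary half-dimension is 32 and one valid
# split is mrope_half_section = [16, 8, 8] for the (t, h, w) axes.
#   inv_freq = precompute_inv_freq(10000.0, 64)        # (32,)
#   pos = torch.zeros(3, 1, 10, dtype=torch.long)      # (3, B, T)
#   cos, sin = precompute_cos_sin_3d(pos, inv_freq, [16, 8, 8])
#   # cos.shape == sin.shape == (1, 10, 64)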
|
809 |
+
|
810 |
+
|
811 |
+
class RopeScaling(TypedDict, total=False):
|
812 |
+
rope_type: str
|
813 |
+
factor: float
|
814 |
+
mrope_section: list[int]
|
815 |
+
mrope_interleaved: bool
|
816 |
+
low_freq_factor: float
|
817 |
+
high_freq_factor: float
|
818 |
+
original_max_position_embeddings: int
|
819 |
+
|
820 |
+
|
821 |
+
class IsaacConfig(Qwen3Config):
|
822 |
+
"""Configuration class for Isaac multimodal model."""
|
823 |
+
|
824 |
+
model_type = "isaac"
|
825 |
+
sub_configs = {"vision_config": PixelShuffleSiglip2VisionConfig}
|
826 |
+
|
827 |
+
def __init__(
|
828 |
+
self,
|
829 |
+
vision_config=None,
|
830 |
+
vision_patch_size: int = 16,
|
831 |
+
vision_max_num_patches: int = 256,
|
832 |
+
vision_min_num_patches: int | None = None,
|
833 |
+
pixel_shuffle_scale: int = 1,
|
834 |
+
max_sequence_length: int = 16384,
|
835 |
+
vision_token: str = "<image>",
|
836 |
+
**kwargs,
|
837 |
+
):
|
838 |
+
super().__init__(**kwargs)
|
839 |
+
|
840 |
+
# Handle vision config - either dict or PixelShuffleSiglip2VisionConfig instance
|
841 |
+
if isinstance(vision_config, dict):
|
842 |
+
self.vision_config = self.sub_configs["vision_config"](**vision_config)
|
843 |
+
elif vision_config is None:
|
844 |
+
self.vision_config = self.sub_configs["vision_config"]()
|
845 |
+
else:
|
846 |
+
self.vision_config = vision_config
|
847 |
+
|
848 |
+
# EventStreamProcessor parameters (for backward compatibility)
|
849 |
+
self.video_patch_size = vision_patch_size
|
850 |
+
self.vision_max_num_patches = vision_max_num_patches
|
851 |
+
self.vision_min_num_patches = vision_min_num_patches
|
852 |
+
self.pixel_shuffle_scale = pixel_shuffle_scale
|
853 |
+
|
854 |
+
# Processing parameters
|
855 |
+
self.max_sequence_length = max_sequence_length
|
856 |
+
self.vision_token = vision_token
|
857 |
+
|
858 |
+
|
859 |
+
# ============================================================================
|
860 |
+
# Processor Components
|
861 |
+
# ============================================================================
|
862 |
+
|
863 |
+
|
864 |
+
def create_text_event(tokenizer: AutoTokenizer, text: str, time: float = 0.0) -> Event:
|
865 |
+
r"""Wrap a text into an `Event` compatible with the multimodal TensorStream.
|
866 |
+
|
867 |
+
Args:
|
868 |
+
tokenizer (`AutoTokenizer`):
|
869 |
+
Tokenizer used to convert text into model vocabulary ids.
|
870 |
+
text (`str`):
|
871 |
+
Plain-text fragment to encode.
|
872 |
+
time (`float`, *optional*, defaults to 0.0):
|
873 |
+
Timeline coordinate associated with the event. Both start and end times use the same value because text
|
874 |
+
segments are instantaneous in the scheduler.
|
875 |
+
|
876 |
+
Returns:
|
877 |
+
`Event`: Event carrying a `(num_tokens, 1)` tensor of token ids with matching
|
878 |
+
metadata so that downstream processors can compute modality-specific embeddings.
|
879 |
+
"""
|
880 |
+
tokens = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").squeeze(0)
|
881 |
+
|
882 |
+
# Calculate dimensions for the event
|
883 |
+
num_tokens = len(tokens)
|
884 |
+
dims_virtual = [num_tokens, 1] # [sequence_length, 1]
|
885 |
+
dims_real = dims_virtual.copy()
|
886 |
+
|
887 |
+
# Ensure tokens has the right shape for tensor_stream_token_view
|
888 |
+
# It expects a 2D tensor where sum(dim=-1) gives the token IDs
|
889 |
+
if tokens.dim() == 1:
|
890 |
+
tokens = tokens.unsqueeze(-1)
|
891 |
+
|
892 |
+
return Event(
|
893 |
+
data=tokens,
|
894 |
+
type=TextType.text,
|
895 |
+
time=(time, time),
|
896 |
+
dims_virtual=dims_virtual,
|
897 |
+
dims_real=dims_real,
|
898 |
+
idx_range=(0, num_tokens),
|
899 |
+
)
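# Usage sketch (illustrative; the tokenizer checkpoint is an assumption, any tokenizer with a
# compatible vocabulary works):
#   tok = AutoTokenizer.from_pretrained(checkpoint_path)
#   ev = create_text_event(tok, "hello world", time=0.0)
#   # ev.data has shape (num_tokens, 1) and ev.type is TextType.text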
|
900 |
+
|
901 |
+
|
902 |
+
# ============================================================================
|
903 |
+
# Processor
|
904 |
+
# ============================================================================
|
905 |
+
|
906 |
+
|
907 |
+
class IsaacProcessor(ProcessorMixin):
|
908 |
+
attributes = []
|
909 |
+
tokenizer_class = ("AutoTokenizer",)
|
910 |
+
|
911 |
+
def __init__(
|
912 |
+
self,
|
913 |
+
tokenizer: AutoTokenizer,
|
914 |
+
config: IsaacConfig,
|
915 |
+
):
|
916 |
+
super().__init__()
|
917 |
+
self.tokenizer = tokenizer
|
918 |
+
self.config = config
|
919 |
+
|
920 |
+
# Use vision token from config
|
921 |
+
self.vision_token = config.vision_token
|
922 |
+
|
923 |
+
# Processing parameters
|
924 |
+
self.max_sequence_length = config.max_sequence_length
|
925 |
+
|
926 |
+
# Vision processing parameters
|
927 |
+
self.patch_size = config.video_patch_size
|
928 |
+
self.max_num_patches = config.vision_max_num_patches
|
929 |
+
self.min_num_patches = config.vision_min_num_patches
|
930 |
+
self.pixel_shuffle_scale = config.pixel_shuffle_scale
|
931 |
+
|
932 |
+
def apply_chat_template(
|
933 |
+
self,
|
934 |
+
messages: list[dict[str, Any]],
|
935 |
+
tokenize: bool = False,
|
936 |
+
add_generation_prompt: bool = False,
|
937 |
+
**kwargs,
|
938 |
+
) -> Any:
|
939 |
+
return self.tokenizer.apply_chat_template(
|
940 |
+
messages, tokenize=tokenize, add_generation_prompt=add_generation_prompt, **kwargs
|
941 |
+
)
|
942 |
+
|
943 |
+
def build_event_stream_simple(
|
944 |
+
self,
|
945 |
+
text: str,
|
946 |
+
images: list[PIL.Image.Image] | None = None,
|
947 |
+
) -> Stream:
|
948 |
+
events = []
|
949 |
+
# Process text and images
|
950 |
+
# Find all occurrences of vision token
|
951 |
+
|
952 |
+
pattern = re.escape(self.vision_token)
|
953 |
+
parts = re.split(f"({pattern})", text) # Keep the delimiter in the result
|
954 |
+
|
955 |
+
image_idx = 0
|
956 |
+
for current_time, part in enumerate(parts):
|
957 |
+
if part == self.vision_token:
|
958 |
+
# Replace vision token with image event
|
959 |
+
if image_idx < len(images):
|
960 |
+
# Create vision event from PIL image
|
961 |
+
image_tensor = extract_image_pil(images[image_idx])
|
962 |
+
if image_tensor is not None:
|
963 |
+
# Create a vision event with the image tensor
|
964 |
+
vision_event = Event(
|
965 |
+
data=image_tensor.unsqueeze(0), # HWC format from extract_image_pil
|
966 |
+
type=VisionType.image, # I-frame
|
967 |
+
time=(current_time, current_time),
|
968 |
+
)
|
969 |
+
events.append(vision_event)
|
970 |
+
image_idx += 1
|
971 |
+
elif part: # Non-empty text part
|
972 |
+
# tokens = self.text_processor.tokenize(part, add_special_tokens=False)
|
973 |
+
text_event = create_text_event(self.tokenizer, part, time=current_time)
|
974 |
+
events.append(text_event)
|
975 |
+
|
976 |
+
# Process vision events if any
|
977 |
+
if any(event.type == VisionType.image for event in events):
|
978 |
+
# Separate text and vision events for processing
|
979 |
+
text_events = [event for event in events if event.type == TextType.text]
|
980 |
+
vision_events = [event for event in events if event.type == VisionType.image]
|
981 |
+
|
982 |
+
# Process vision events using functional approach
|
983 |
+
processed_vision_events = []
|
984 |
+
for vision_event in vision_events:
|
985 |
+
# Process the vision data
|
986 |
+
patches, dims_virtual = process_vision_for_patches(
|
987 |
+
vision_event.data.squeeze(0), # Remove the extra dimension
|
988 |
+
patch_size=self.patch_size,
|
989 |
+
max_num_patches=self.max_num_patches,
|
990 |
+
min_num_patches=self.min_num_patches,
|
991 |
+
pixel_shuffle_scale=self.pixel_shuffle_scale,
|
992 |
+
)
|
993 |
+
|
994 |
+
# Update event with processed data
|
995 |
+
vision_event.data = patches.unsqueeze(1) # Add back frame dimension
|
996 |
+
vision_event.dims_virtual = dims_virtual
|
997 |
+
vision_event.dims_real = (
|
998 |
+
dims_virtual
|
999 |
+
if self.pixel_shuffle_scale == 1
|
1000 |
+
else [
|
1001 |
+
dims_virtual[0],
|
1002 |
+
dims_virtual[1] * self.pixel_shuffle_scale,
|
1003 |
+
dims_virtual[2] * self.pixel_shuffle_scale,
|
1004 |
+
]
|
1005 |
+
)
|
1006 |
+
vision_event.idx_range = (0, math.prod(dims_virtual))
|
1007 |
+
|
1008 |
+
# Flatten the patches
|
1009 |
+
vision_event.data = vision_event.data.reshape(-1, vision_event.data.shape[-1])
|
1010 |
+
processed_vision_events.append(vision_event)
|
1011 |
+
|
1012 |
+
events = text_events + processed_vision_events
|
1013 |
+
|
1014 |
+
# Create stream without scheduling (events already in order)
|
1015 |
+
return create_stream(events, priority=[TextType.text, VisionType.image], schedule=True)
|
1016 |
+
|
1017 |
+
def __call__(
|
1018 |
+
self,
|
1019 |
+
text: Union[str, list[str]],
|
1020 |
+
images: Union[PIL.Image.Image, list[PIL.Image.Image], None] = None,
|
1021 |
+
return_tensors: str | TensorType | None = TensorType.PYTORCH,
|
1022 |
+
**kwargs,
|
1023 |
+
) -> BatchFeature:
|
1024 |
+
"""
|
1025 |
+
Process text and images into TensorStream format.
|
1026 |
+
Args:
|
1027 |
+
text: Input text or list of texts with vision tokens
|
1028 |
+
images: PIL image or list of images (optional)
|
1029 |
+
return_tensors: Format for output tensors
|
1030 |
+
|
1031 |
+
Returns:
|
1032 |
+
BatchFeature with input_ids and tensor_stream
|
1033 |
+
"""
|
1034 |
+
# Normalize inputs to lists
|
1035 |
+
if isinstance(text, str):
|
1036 |
+
texts = [text]
|
1037 |
+
else:
|
1038 |
+
texts = text
|
1039 |
+
|
1040 |
+
if images is not None:
|
1041 |
+
if isinstance(images, PIL.Image.Image):
|
1042 |
+
images_list = [images]
|
1043 |
+
else:
|
1044 |
+
images_list = images
|
1045 |
+
else:
|
1046 |
+
images_list = None
|
1047 |
+
|
1048 |
+
if len(texts) != 1:
|
1049 |
+
raise ValueError("IsaacProcessor currently supports batch_size=1")
|
1050 |
+
if images_list is not None:
|
1051 |
+
# Count vision tokens in text to validate image count
|
1052 |
+
vision_token_count = texts[0].count(self.vision_token)
|
1053 |
+
if vision_token_count != len(images_list):
|
1054 |
+
raise ValueError(
|
1055 |
+
f"Number of {self.vision_token} tokens in text ({vision_token_count}) "
|
1056 |
+
f"must match number of images ({len(images_list)})"
|
1057 |
+
)
|
1058 |
+
|
1059 |
+
# Build event stream
|
1060 |
+
stream = self.build_event_stream_simple(
|
1061 |
+
text=texts[0],
|
1062 |
+
images=images_list,
|
1063 |
+
)
|
1064 |
+
|
1065 |
+
# Create TensorStream
|
1066 |
+
tensor_stream = TensorStream([stream])
|
1067 |
+
|
1068 |
+
# Slice to max length if needed
|
1069 |
+
_, T = tensor_stream.shape
|
1070 |
+
if T > self.max_sequence_length:
|
1071 |
+
tensor_stream = ts_slice(tensor_stream, start=T - self.max_sequence_length, end=T)
|
1072 |
+
|
1073 |
+
# Get token view
|
1074 |
+
tokens = tensor_stream_token_view(tensor_stream)
|
1075 |
+
if return_tensors in (TensorType.PYTORCH, "pt"):
|
1076 |
+
input_ids = torch.as_tensor(tokens, dtype=torch.long)
|
1077 |
+
else:
|
1078 |
+
input_ids = tokens
|
1079 |
+
|
1080 |
+
data = {
|
1081 |
+
"input_ids": input_ids,
|
1082 |
+
"tensor_stream": tensor_stream,
|
1083 |
+
}
|
1084 |
+
|
1085 |
+
return BatchFeature(data=data)
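# End-to-end sketch (illustrative; `tokenizer` and `pil_image` are assumed to exist):
#   processor = IsaacProcessor(tokenizer=tokenizer, config=IsaacConfig())
#   batch = processor("Describe <image> briefly.", images=[pil_image])
#   # batch["input_ids"] is a (1, T) LongTensor; batch["tensor_stream"] packs the
#   # interleaved text and vision events consumed by IsaacModel.forward.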
|
1086 |
+
|
1087 |
+
|
1088 |
+
# ============================================================================
|
1089 |
+
# Model
|
1090 |
+
# ============================================================================
|
1091 |
+
|
1092 |
+
|
1093 |
+
def compute_position_ids_input_ids(input_ids: torch.Tensor) -> torch.Tensor:
|
1094 |
+
r"""Create 3D positional indices for token input.
|
1095 |
+
|
1096 |
+
Args:
|
1097 |
+
input_ids (`torch.Tensor`):
|
1098 |
+
Tensor of shape `(batch_size, seq_len)` containing token ids.
|
1099 |
+
|
1100 |
+
Returns:
|
1101 |
+
`torch.Tensor`: Positional indices with shape `(batch_size, seq_len, 3)` where each channel duplicates the
|
1102 |
+
1D position so it can be consumed by the 3-axis MRoPE rotary embedding.
|
1103 |
+
"""
|
1104 |
+
batch_size, seq_length = input_ids.shape
|
1105 |
+
position_ids = torch.arange(seq_length, device=input_ids.device)
|
1106 |
+
position_ids = position_ids.view(1, -1).expand(batch_size, -1)
|
1107 |
+
position_ids = position_ids.unsqueeze(2).expand(-1, -1, 3) # Add 3D for MRoPE
|
1108 |
+
return position_ids
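# Example (illustrative): compute_position_ids_input_ids(torch.tensor([[5, 6, 7, 8]]))
# returns a (1, 4, 3) tensor in which each of the three MRoPE axes holds [0, 1, 2, 3].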
|
1109 |
+
|
1110 |
+
|
1111 |
+
class IsaacRotaryEmbedding(nn.Module):
|
1112 |
+
def __init__(self, config: IsaacConfig, device=None):
|
1113 |
+
super().__init__()
|
1114 |
+
|
1115 |
+
# Extract dimensions from config
|
1116 |
+
self.hidden_size = config.hidden_size
|
1117 |
+
self.num_attention_heads = config.num_attention_heads
|
1118 |
+
self.head_dim = config.head_dim
|
1119 |
+
|
1120 |
+
# Get rope_scaling config - use direct access when available
|
1121 |
+
rope_scaling = getattr(config, "rope_scaling", None) or {}
|
1122 |
+
|
1123 |
+
# Read RopeScaling parameters
|
1124 |
+
self.rope_type = rope_scaling.get("rope_type", "default")
|
1125 |
+
|
1126 |
+
self.mrope_section = [
|
1127 |
+
self.head_dim // 4, # 2x more for temporal dim
|
1128 |
+
self.head_dim // 8,
|
1129 |
+
self.head_dim // 8,
|
1130 |
+
]
|
1131 |
+
|
1132 |
+
rope_base = getattr(config, "rope_theta", 10000.0)
|
1133 |
+
inv_freq = precompute_inv_freq(rope_base, self.head_dim)
|
1134 |
+
self.register_buffer("inv_freq", inv_freq, persistent=False)
|
1135 |
+
|
1136 |
+
def forward(self, position_ids: torch.Tensor, modality_tensor: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
|
1137 |
+
with torch.no_grad():
|
1138 |
+
# Ensure non-spatial tokens have 1D rotation equivalence
|
1139 |
+
not_spatial = ~(modality_tensor == VisionType.image.value)
|
1140 |
+
# shape is [N, 1]
|
1141 |
+
data_1d = position_ids[not_spatial][..., 0].unsqueeze(-1)
|
1142 |
+
# now broadcast it from [N, 1] -> [N, D] so it matches pos[not_spatial] exactly
|
1143 |
+
data_1d = data_1d.expand(-1, position_ids.shape[-1]) # expand along the last dim
|
1144 |
+
position_ids = position_ids.clone() # Clone to avoid warning about in-place operations on expanded tensors
|
1145 |
+
position_ids[not_spatial] = data_1d
|
1146 |
+
position_ids = position_ids.permute(2, 0, 1) # pos dim first -> (3, B, L)
|
1147 |
+
cos, sin = precompute_cos_sin_3d(position_ids, self.inv_freq, self.mrope_section)
|
1148 |
+
|
1149 |
+
return cos, sin
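# Note (illustrative): the three blocks sum to head_dim // 4 + head_dim // 8 + head_dim // 8
# = head_dim // 2, which equals inv_freq.shape[0], so the temporal/height/width sections
# exactly tile the rotary half-dimension.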
|
1150 |
+
|
1151 |
+
|
1152 |
+
class IsaacModel(Qwen3Model):
|
1153 |
+
def __init__(self, config: IsaacConfig):
|
1154 |
+
super().__init__(config)
|
1155 |
+
self.layers = torch.nn.ModuleList(
|
1156 |
+
[Qwen3DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
|
1157 |
+
)
|
1158 |
+
self.rotary_emb = IsaacRotaryEmbedding(config, device=self.device)
|
1159 |
+
|
1160 |
+
vision_cfg = config.vision_config
|
1161 |
+
if vision_cfg is None:
|
1162 |
+
raise ValueError("IsaacConfig should always have vision_config")
|
1163 |
+
|
1164 |
+
hidden_dim = vision_cfg.hidden_size * (vision_cfg.pixel_shuffle_scale_factor**2)
|
1165 |
+
self.vision_embedding = nn.Sequential(
|
1166 |
+
Siglip2SequenceVisionTransformer(vision_cfg),
|
1167 |
+
nn.Linear(
|
1168 |
+
hidden_dim,
|
1169 |
+
4 * hidden_dim,
|
1170 |
+
bias=False,
|
1171 |
+
),
|
1172 |
+
nn.SiLU(),
|
1173 |
+
nn.Linear(4 * hidden_dim, config.hidden_size, bias=False),
|
1174 |
+
)
|
1175 |
+
|
1176 |
+
# Dispatch table for TensorStream balanced embedding (text + vision)
|
1177 |
+
self.embed_fns = {
|
1178 |
+
TextType: self.embed_text_tokens,
|
1179 |
+
VisionType: self.embed_vision,
|
1180 |
+
}
|
1181 |
+
|
1182 |
+
def embed_text_tokens(self, token_ids: torch.Tensor) -> torch.Tensor:
|
1183 |
+
"""Embed text tokens, squeezing singleton dimensions."""
|
1184 |
+
# Text events are shaped as (..., 1); squeeze the singleton index dim
|
1185 |
+
h = self.embed_tokens(token_ids)
|
1186 |
+
if h.dim() >= 2 and h.size(-2) == 1:
|
1187 |
+
h = h[..., 0, :]
|
1188 |
+
return h
|
1189 |
+
|
1190 |
+
def embed_vision(self, vision_tokens: tuple[torch.Tensor, torch.Tensor]) -> torch.Tensor:
|
1191 |
+
"""Embed vision tokens using the vision encoder."""
|
1192 |
+
# vision tokens is (seq_patches, token_grids)
|
1193 |
+
return self.vision_embedding(vision_tokens)
|
1194 |
+
|
1195 |
+
def embed_stream(self, tensor_stream: TensorStream) -> torch.Tensor:
|
1196 |
+
"""
|
1197 |
+
Embed each modality stream independently, preserving the original TensorStream
|
1198 |
+
structure.
|
1199 |
+
"""
|
1200 |
+
flat_stream = tensor_stream.flat_stream()
|
1201 |
+
per_modality_stream = group_streams(flat_stream, group_fn=lambda ev: ev.type, schedule=False)
|
1202 |
+
per_modality_compact_stream = {k: v.compact() for k, v in per_modality_stream.items()}
|
1203 |
+
|
1204 |
+
# Collect per-event grids for vision tokens (H, W like dims sans time)
|
1205 |
+
token_grids = defaultdict(list)
|
1206 |
+
for stream in tensor_stream.streams:
|
1207 |
+
for event in stream:
|
1208 |
+
token_grids[event.type].append(event.dims(virtual=False))
|
1209 |
+
|
1210 |
+
embedded_compact = {}
|
1211 |
+
for stream_type, modality_payload_tensor in per_modality_compact_stream.items():
|
1212 |
+
if stream_type.modality == VisionType:
|
1213 |
+
# Build a (N_events, 2) grid tensor with spatial dims only
|
1214 |
+
grids = token_grids.get(stream_type, [])
|
1215 |
+
if len(grids) == 0:
|
1216 |
+
input_tensor = modality_payload_tensor
|
1217 |
+
else:
|
1218 |
+
token_grids_tensor = torch.tensor(grids, dtype=torch.long, device=tensor_stream.device)[:, 1:]
|
1219 |
+
input_tensor = (modality_payload_tensor, token_grids_tensor)
|
1220 |
+
embedded_compact[stream_type] = self.embed_fns[stream_type.modality](input_tensor)
|
1221 |
+
else:
|
1222 |
+
embedded_compact[stream_type] = self.embed_fns[stream_type.modality](modality_payload_tensor)
|
1223 |
+
|
1224 |
+
# Reconstruct a TensorStream with embedded payloads and compact
|
1225 |
+
embedded_ts = reconstruct_tensor_stream_from_compact_dict(tensor_stream, embedded_compact)
|
1226 |
+
h = embedded_ts.compact() # (B, T, D)
|
1227 |
+
return h
|
1228 |
+
|
1229 |
+
def forward(
|
1230 |
+
self,
|
1231 |
+
input_ids: torch.LongTensor | None = None,
|
1232 |
+
tensor_stream: TensorStream | None = None,
|
1233 |
+
attention_mask: torch.Tensor | None = None,
|
1234 |
+
position_ids: torch.LongTensor | None = None,
|
1235 |
+
modality_tensor: torch.LongTensor | None = None,
|
1236 |
+
past_key_values: list[torch.FloatTensor] | None = None,
|
1237 |
+
inputs_embeds: torch.FloatTensor | None = None,
|
1238 |
+
use_cache: bool | None = None,
|
1239 |
+
output_hidden_states: bool | None = None,
|
1240 |
+
return_dict: bool | None = None,
|
1241 |
+
cache_position: torch.LongTensor | None = None,
|
1242 |
+
**kwargs,
|
1243 |
+
) -> tuple | BaseModelOutputWithPast:
|
1244 |
+
"""
|
1245 |
+
Forward pass with MRoPE position embeddings.
|
1246 |
+
|
1247 |
+
Computes position embeddings once and passes them through all layers.
|
1248 |
+
"""
|
1249 |
+
output_hidden_states = (
|
1250 |
+
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
|
1251 |
+
)
|
1252 |
+
use_cache = use_cache if use_cache is not None else self.config.use_cache
|
1253 |
+
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
|
1254 |
+
|
1255 |
+
# Get inputs
|
1256 |
+
if tensor_stream is not None and inputs_embeds is not None:
|
1257 |
+
raise ValueError("You cannot specify both tensor_stream and inputs_embeds")
|
1258 |
+
elif tensor_stream is not None:
|
1259 |
+
# Embed TensorStream directly
|
1260 |
+
inputs_embeds = self.embed_stream(tensor_stream)
|
1261 |
+
# Create modality tensor if not provided
|
1262 |
+
if modality_tensor is None:
|
1263 |
+
modality_tensor = modality_mask(tensor_stream)
|
1264 |
+
elif input_ids is not None and inputs_embeds is not None:
|
1265 |
+
raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
|
1266 |
+
elif input_ids is not None:
|
1267 |
+
inputs_embeds = self.embed_tokens(input_ids)
|
1268 |
+
# Create text modality tensor if not provided
|
1269 |
+
if modality_tensor is None:
|
1270 |
+
batch_size, seq_length = input_ids.shape
|
1271 |
+
modality_tensor = torch.full(
|
1272 |
+
(batch_size, seq_length), TextType.text.value, device=input_ids.device, dtype=torch.long
|
1273 |
+
)
|
1274 |
+
elif inputs_embeds is None:
|
1275 |
+
raise ValueError("You have to specify either tensor_stream, input_ids or inputs_embeds")
|
1276 |
+
|
1277 |
+
# Create default position_ids if not provided
|
1278 |
+
if position_ids is None:
|
1279 |
+
if tensor_stream is not None:
|
1280 |
+
position_ids = compute_mrope_pos_tensor(tensor_stream) # (B,L,3)
|
1281 |
+
else:
|
1282 |
+
position_ids = compute_position_ids_input_ids(input_ids)
|
1283 |
+
|
1284 |
+
# Compute MRoPE position embeddings if we have custom rotary_emb
|
1285 |
+
cos, sin = self.rotary_emb(position_ids, modality_tensor)
|
1286 |
+
cos = cos.to(inputs_embeds.dtype)
|
1287 |
+
sin = sin.to(inputs_embeds.dtype)
|
1288 |
+
|
1289 |
+
# Prepare attention mask
|
1290 |
+
if attention_mask is not None:
|
1291 |
+
attention_mask = self._update_causal_mask(
|
1292 |
+
attention_mask, inputs_embeds, cache_position, past_key_values, False
|
1293 |
+
)
|
1294 |
+
|
1295 |
+
# Initialize hidden states
|
1296 |
+
hidden_states = inputs_embeds
|
1297 |
+
|
1298 |
+
for decoder_layer in self.layers:
|
1299 |
+
layer_outputs = decoder_layer(
|
1300 |
+
hidden_states,
|
1301 |
+
attention_mask=attention_mask,
|
1302 |
+
position_ids=position_ids,
|
1303 |
+
past_key_value=past_key_values,
|
1304 |
+
use_cache=use_cache,
|
1305 |
+
cache_position=cache_position,
|
1306 |
+
position_embeddings=(cos, sin),
|
1307 |
+
**kwargs,
|
1308 |
+
)
|
1309 |
+
|
1310 |
+
hidden_states = layer_outputs[0]
|
1311 |
+
|
1312 |
+
# Final layer norm
|
1313 |
+
hidden_states = self.norm(hidden_states)
|
1314 |
+
|
1315 |
+
return BaseModelOutputWithPast(
|
1316 |
+
last_hidden_state=hidden_states,
|
1317 |
+
past_key_values=past_key_values,
|
1318 |
+
)
|
1319 |
+
|
1320 |
+
|
1321 |
+
class IsaacForConditionalGeneration(Qwen3ForCausalLM, GenerationMixin):
|
1322 |
+
"""Isaac multimodal model for conditional generation."""
|
1323 |
+
|
1324 |
+
config_class = IsaacConfig
|
1325 |
+
|
1326 |
+
def __init__(self, config: IsaacConfig):
|
1327 |
+
Qwen3PreTrainedModel.__init__(self, config)
|
1328 |
+
self.model = IsaacModel(config) # Use our custom model
|
1329 |
+
self.vocab_size = config.vocab_size
|
1330 |
+
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
|
1331 |
+
# Tracks rotary position offsets computed during a full forward pass so decode steps can reuse them.
|
1332 |
+
self.rope_deltas = None
|
1333 |
+
|
1334 |
+
self.config = config
|
1335 |
+
|
1336 |
+
def get_rope_index(
|
1337 |
+
self,
|
1338 |
+
input_ids: torch.Tensor | None,
|
1339 |
+
tensor_stream: TensorStream | None,
|
1340 |
+
attention_mask: torch.Tensor | None,
|
1341 |
+
) -> tuple[torch.Tensor, torch.Tensor]:
|
1342 |
+
"""Compute MRoPE position ids from a TensorStream (or 1D fallback).
|
1343 |
+
|
1344 |
+
Returns (position_ids, rope_deltas). position_ids is (B,L,3) for MRoPE.
|
1345 |
+
rope_deltas is (B,1) used to advance positions in decode.
|
1346 |
+
"""
|
1347 |
+
# tensor_stream present: compute 3D coords
|
1348 |
+
if tensor_stream is None and input_ids is None:
|
1349 |
+
raise ValueError("`tensor_stream` or `input_ids` must be provided to compute rope indices")
|
1350 |
+
|
1351 |
+
if tensor_stream is not None:
|
1352 |
+
pos_3d = compute_mrope_pos_tensor(tensor_stream) # (B,L,3)
|
1353 |
+
else:
|
1354 |
+
pos_3d = compute_position_ids_input_ids(input_ids)
|
1355 |
+
B, L, _ = pos_3d.shape
|
1356 |
+
|
1357 |
+
# Max position per batch across the 3 planes and sequence dimension: (B,)
|
1358 |
+
m_per_batch = pos_3d.amax(dim=(1, 2))
|
1359 |
+
|
1360 |
+
# Sequence lengths per batch: (B,)
|
1361 |
+
if attention_mask is None:
|
1362 |
+
seq_lens = torch.full_like(m_per_batch, L)
|
1363 |
+
else:
|
1364 |
+
seq_lens = attention_mask.eq(1).sum(dim=-1).to(dtype=m_per_batch.dtype, device=m_per_batch.device)
|
1365 |
+
|
1366 |
+
rope_deltas = (m_per_batch + 1 - seq_lens).to(dtype=pos_3d.dtype).unsqueeze(1)
|
1367 |
+
return pos_3d, rope_deltas
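# Note (illustrative): for a pure text sequence of length L with no attention mask the
# maximum position is L - 1, so rope_deltas = (L - 1) + 1 - L = 0; packed vision grids can
# raise the maximum above L - 1, and the resulting positive offset is what decode steps
# reuse to advance position_ids without rebuilding the TensorStream.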
|
1368 |
+
|
1369 |
+
def forward(
|
1370 |
+
self,
|
1371 |
+
input_ids: torch.LongTensor | None = None,
|
1372 |
+
tensor_stream: TensorStream | None = None,
|
1373 |
+
attention_mask: torch.Tensor | None = None,
|
1374 |
+
position_ids: torch.LongTensor | None = None,
|
1375 |
+
past_key_values: list[torch.FloatTensor] | None = None,
|
1376 |
+
inputs_embeds: torch.FloatTensor | None = None,
|
1377 |
+
labels: torch.LongTensor | None = None,
|
1378 |
+
use_cache: bool | None = None,
|
1379 |
+
output_hidden_states: bool | None = None,
|
1380 |
+
return_dict: bool | None = None,
|
1381 |
+
cache_position: torch.LongTensor | None = None,
|
1382 |
+
**kwargs,
|
1383 |
+
) -> tuple | CausalLMOutputWithPast:
|
1384 |
+
"""
|
1385 |
+
Forward pass for conditional generation supporting both standard inputs and TensorStream.
|
1386 |
+
Uses our embed_stream approach for multimodal inputs.
|
1387 |
+
"""
|
1388 |
+
|
1389 |
+
# Don't compute embeddings here - let the model handle it
|
1390 |
+
if tensor_stream is not None:
|
1391 |
+
input_ids = None
|
1392 |
+
if input_ids is None and inputs_embeds is None and tensor_stream is None:
|
1393 |
+
raise ValueError("Either input_ids, inputs_embeds, or tensor_stream must be provided.")
|
1394 |
+
|
1395 |
+
# Build position ids (MRoPE) if needed and tensor_stream is available
|
1396 |
+
# During decode we reuse `self.rope_deltas` computed on the initial forward pass; `rope_delta` captures how far
|
1397 |
+
# cached rotary phases have progressed so we can advance `position_ids` without rebuilding the TensorStream.
|
1398 |
+
if position_ids is None and tensor_stream is not None:
|
1399 |
+
position_ids, self.rope_deltas = self.get_rope_index(input_ids, tensor_stream, attention_mask)
|
1400 |
+
elif position_ids is None and input_ids is not None:
|
1401 |
+
# For text inputs build position ids and modality tensor
|
1402 |
+
position_ids = compute_position_ids_input_ids(input_ids)
|
1403 |
+
if cache_position is not None and self.rope_deltas is not None:
|
1404 |
+
# Combine the incremental decode step (`cache_position`) with cached offsets so hidden states continue
|
1405 |
+
# rotating in lockstep across generation steps.
|
1406 |
+
rope_delta = (cache_position[0] + self.rope_deltas).to(input_ids.device)
|
1407 |
+
else:
|
1408 |
+
rope_delta = 0
|
1409 |
+
if cache_position is not None and not isinstance(rope_delta, int):  # otherwise `rope_delta` is an int `0`
|
1410 |
+
batch_size = input_ids.shape[0]
|
1411 |
+
rope_delta = rope_delta.repeat_interleave(batch_size // rope_delta.shape[0], dim=0)
|
1412 |
+
position_ids = position_ids.add(rope_delta)
|
1413 |
+
|
1414 |
+
if tensor_stream is not None:
|
1415 |
+
modality_tensor = modality_mask(tensor_stream)
|
1416 |
+
else:
|
1417 |
+
batch_size, seq_len = input_ids.shape
|
1418 |
+
modality_tensor = torch.empty(batch_size, seq_len, device=position_ids.device).fill_(TextType.text.value)
|
1419 |
+
|
1420 |
+
outputs = self.model(
|
1421 |
+
input_ids=input_ids,
|
1422 |
+
tensor_stream=tensor_stream,
|
1423 |
+
attention_mask=attention_mask,
|
1424 |
+
position_ids=position_ids,
|
1425 |
+
modality_tensor=modality_tensor,
|
1426 |
+
past_key_values=past_key_values,
|
1427 |
+
inputs_embeds=inputs_embeds,
|
1428 |
+
use_cache=use_cache,
|
1429 |
+
output_hidden_states=output_hidden_states,
|
1430 |
+
return_dict=return_dict,
|
1431 |
+
cache_position=cache_position,
|
1432 |
+
**kwargs,
|
1433 |
+
)
|
1434 |
+
|
1435 |
+
hidden_states = outputs[0]
|
1436 |
+
logits = self.lm_head(hidden_states)
|
1437 |
+
|
1438 |
+
loss = None
|
1439 |
+
if labels is not None:
|
1440 |
+
loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size)
|
1441 |
+
|
1442 |
+
return CausalLMOutputWithPast(
|
1443 |
+
loss=loss,
|
1444 |
+
logits=logits,
|
1445 |
+
past_key_values=outputs.past_key_values,
|
1446 |
+
hidden_states=outputs.hidden_states,
|
1447 |
+
attentions=None,
|
1448 |
+
)
|
1449 |
+
|
1450 |
+
def prepare_inputs_for_generation(
|
1451 |
+
self,
|
1452 |
+
input_ids: torch.LongTensor,
|
1453 |
+
past_key_values: list[torch.FloatTensor] | None = None,
|
1454 |
+
attention_mask: torch.Tensor | None = None,
|
1455 |
+
inputs_embeds: torch.FloatTensor | None = None,
|
1456 |
+
tensor_stream: TensorStream | None = None,
|
1457 |
+
cache_position: torch.LongTensor | None = None,
|
1458 |
+
position_ids: torch.LongTensor | None = None,
|
1459 |
+
use_cache: bool = True,
|
1460 |
+
**kwargs,
|
1461 |
+
) -> dict[str, Any]:
|
1462 |
+
"""
|
1463 |
+
Prepare inputs for generation, handling TensorStream inputs properly.
|
1464 |
+
"""
|
1465 |
+
# Call parent preparation
|
1466 |
+
model_inputs = super().prepare_inputs_for_generation(
|
1467 |
+
input_ids,
|
1468 |
+
past_key_values=past_key_values,
|
1469 |
+
attention_mask=attention_mask,
|
1470 |
+
inputs_embeds=inputs_embeds,
|
1471 |
+
cache_position=cache_position,
|
1472 |
+
position_ids=position_ids,
|
1473 |
+
use_cache=use_cache,
|
1474 |
+
**kwargs,
|
1475 |
+
)
|
1476 |
+
|
1477 |
+
# Handle TensorStream for first forward pass only
|
1478 |
+
if tensor_stream is not None and (cache_position is None or cache_position[0] == 0):
|
1479 |
+
model_inputs["tensor_stream"] = tensor_stream
|
1480 |
+
# Let forward rebuild position_ids using cached deltas during decode
|
1481 |
+
model_inputs["position_ids"] = None
|
1482 |
+
# Drop tensor_stream after step 0
|
1483 |
+
if cache_position is not None and cache_position[0] != 0:
|
1484 |
+
model_inputs["tensor_stream"] = None
|
1485 |
+
return model_inputs
|
1486 |
+
|
1487 |
+
def can_generate(self) -> bool:
|
1488 |
+
return True
|
1489 |
+
|
1490 |
+
|
1491 |
+
__all__ = [
|
1492 |
+
"IsaacConfig",
|
1493 |
+
"IsaacModel",
|
1494 |
+
"IsaacForConditionalGeneration",
|
1495 |
+
"IsaacProcessor",
|
1496 |
+
]
|
special_tokens_map.json
ADDED
@@ -0,0 +1,31 @@
|
1 |
+
{
|
2 |
+
"additional_special_tokens": [
|
3 |
+
"<|im_start|>",
|
4 |
+
"<|im_end|>",
|
5 |
+
"<|object_ref_start|>",
|
6 |
+
"<|object_ref_end|>",
|
7 |
+
"<|box_start|>",
|
8 |
+
"<|box_end|>",
|
9 |
+
"<|quad_start|>",
|
10 |
+
"<|quad_end|>",
|
11 |
+
"<|vision_start|>",
|
12 |
+
"<|vision_end|>",
|
13 |
+
"<|vision_pad|>",
|
14 |
+
"<|image_pad|>",
|
15 |
+
"<|video_pad|>"
|
16 |
+
],
|
17 |
+
"eos_token": {
|
18 |
+
"content": "<|im_end|>",
|
19 |
+
"lstrip": false,
|
20 |
+
"normalized": false,
|
21 |
+
"rstrip": false,
|
22 |
+
"single_word": false
|
23 |
+
},
|
24 |
+
"pad_token": {
|
25 |
+
"content": "<|endoftext|>",
|
26 |
+
"lstrip": false,
|
27 |
+
"normalized": false,
|
28 |
+
"rstrip": false,
|
29 |
+
"single_word": false
|
30 |
+
}
|
31 |
+
}
|
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
|
3 |
+
size 11422654
|
tokenizer_config.json
ADDED
@@ -0,0 +1,241 @@
|
+{
+  "add_bos_token": false,
+  "add_prefix_space": false,
+  "added_tokens_decoder": {
+    "151643": {
+      "content": "<|endoftext|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151644": {
+      "content": "<|im_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151645": {
+      "content": "<|im_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151646": {
+      "content": "<|object_ref_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151647": {
+      "content": "<|object_ref_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151648": {
+      "content": "<|box_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151649": {
+      "content": "<|box_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151650": {
+      "content": "<|quad_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151651": {
+      "content": "<|quad_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151652": {
+      "content": "<|vision_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151653": {
+      "content": "<|vision_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151654": {
+      "content": "<|vision_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151655": {
+      "content": "<|image_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151656": {
+      "content": "<|video_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151657": {
+      "content": "<tool_call>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151658": {
+      "content": "</tool_call>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151659": {
+      "content": "<|fim_prefix|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151660": {
+      "content": "<|fim_middle|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151661": {
+      "content": "<|fim_suffix|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151662": {
+      "content": "<|fim_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151663": {
+      "content": "<|repo_name|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151664": {
+      "content": "<|file_sep|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151665": {
+      "content": "<tool_response>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151666": {
+      "content": "</tool_response>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151667": {
+      "content": "<think>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151668": {
+      "content": "</think>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    }
+  },
+  "additional_special_tokens": [
+    "<|im_start|>",
+    "<|im_end|>",
+    "<|object_ref_start|>",
+    "<|object_ref_end|>",
+    "<|box_start|>",
+    "<|box_end|>",
+    "<|quad_start|>",
+    "<|quad_end|>",
+    "<|vision_start|>",
+    "<|vision_end|>",
+    "<|vision_pad|>",
+    "<|image_pad|>",
+    "<|video_pad|>"
+  ],
+  "bos_token": null,
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if message.content is string %}\n {%- set content = message.content %}\n {%- else %}\n {%- set content = '' %}\n {%- endif %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if 
enable_thinking is defined and enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- endif %}\n{%- endif %}",
+  "clean_up_tokenization_spaces": false,
+  "eos_token": "<|im_end|>",
+  "errors": "replace",
+  "extra_special_tokens": {},
+  "model_max_length": 131072,
+  "pad_token": "<|endoftext|>",
+  "processor_class": "Qwen2_5_VLProcessor",
+  "split_special_tokens": false,
+  "tokenizer_class": "Qwen2Tokenizer",
+  "unk_token": null
+}
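The `chat_template` above is a ChatML template in the style used by recent Qwen releases: turns are wrapped in `<|im_start|>…<|im_end|>`, tool signatures are injected inside `<tools>` XML tags, tool calls are expected as JSON inside `<tool_call>` tags, and `<think>…</think>` reasoning blocks are stripped from prior turns and re-emitted only where needed; passing `enable_thinking=False` appends an empty think block to suppress explicit reasoning. A small rendering sketch, assuming a transformers version that forwards extra keyword arguments such as `enable_thinking` into the template context (the local path is a placeholder):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./")  # placeholder: path to this checkpoint

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is in this image?"},
]

# Render the ChatML prompt. add_generation_prompt=True appends
# "<|im_start|>assistant\n"; enable_thinking=False additionally emits an empty
# <think></think> block so the model skips explicit reasoning.
prompt = tok.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
print(prompt)
```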
vocab.json
ADDED
The diff for this file is too large to render.
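`vocab.json` (the token-to-id map) and the `merges.txt` added earlier in this commit are the byte-level BPE files consumed by the slow `Qwen2Tokenizer`; `tokenizer.json` bundles the same data for the fast tokenizer. A trivial inspection sketch (the path is a placeholder):

```python
import json

with open("vocab.json", encoding="utf-8") as f:
    vocab = json.load(f)

# Base BPE vocabulary size, before the added special tokens (ids 151643 and up).
print(len(vocab))
```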