Update model card for Sapiens with architecture details
README.md CHANGED

@@ -3,18 +3,9 @@ language: en
 license: cc-by-nc-4.0
 ---
 
-# Sapiens-1B-
+# Pretrain-Sapiens-1B-Torchscript
 
-
-- **Embedding Dimensions:** 1536
-- **Num Layers:** 40
-- **Num Heads:** 24
-- **Feedforward Channels:** 6144
-- **Num Parameters:** 1B
-- **Input Image Size:** 1024 x 1024
-- **Patch Size:** 16 x 16
-
-## Model Details
+### Model Details
 Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 image resolution. The pretrained models, when finetuned for human-centric vision tasks, generalize to in-the-wild conditions.
 Sapiens-1B natively support 1K high-resolution inference. The resulting models exhibit remarkable generalization to in-the-wild data, even when labeled data is scarce or entirely synthetic.
 
@@ -25,6 +16,15 @@ Sapiens-1B natively support 1K high-resolution inference. The resulting models e
 - **Format:** torchscript
 - **File:** sapiens_1b_epoch_173_torchscript.pt2
 
+### Model Card
+- **Embedding Dimensions:** 1536
+- **Num Layers:** 40
+- **Num Heads:** 24
+- **Feedforward Channels:** 6144
+- **Num Parameters:** 1B
+- **Input Image Size:** 1024 x 1024
+- **Patch Size:** 16 x 16
+
 ### Model Sources
 - **Repository:** [https://github.com/facebookresearch/sapiens](https://github.com/facebookresearch/sapiens)
 - **Paper:** [https://arxiv.org/abs/2408.12569](https://arxiv.org/abs/2408.12569)
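For reference, below is a minimal loading sketch for the checkpoint named in the card. It rests on assumptions the card does not state: that the `.pt2` file loads with `torch.jit.load` and that the model accepts a `(B, 3, 1024, 1024)` float tensor. The inference scripts in the Sapiens repository are the authoritative source for preprocessing and output handling.

```python
# Minimal sketch, not the official inference path.
# Assumptions: torch.jit.load works on the exported .pt2 file and the model
# takes a (B, 3, 1024, 1024) float tensor; normalization details are omitted.
import torch

model = torch.jit.load("sapiens_1b_epoch_173_torchscript.pt2", map_location="cpu")
model.eval()

# Per the card: 1024 x 1024 input with 16 x 16 patches -> (1024 / 16)^2 = 4096
# patch tokens, each with a 1536-dimensional embedding.
dummy = torch.randn(1, 3, 1024, 1024)

with torch.inference_mode():
    features = model(dummy)

# Output layout is not specified in the card; inspect it before relying on it.
print(type(features), getattr(features, "shape", None))
```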
