metadata
language:
  - code
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
  - generated_from_trainer
  - dataset_size:81143
  - loss:MultipleNegativesRankingLoss
base_model: NeuML/pubmedbert-base-embeddings
widget:
  - source_sentence: >-
      MALAT1 PLXDC2 FRMD4A DOCK4 CSGALNACT1 LRMDA WWOX MEF2A SLC1A3 MEF2C MED13L
      ARHGAP24 MBNL1 MAML2 HDAC9 ELMO1 SFMBT2 ANKRD44 SP100 MGAT4A ST6GAL1
      APBB1IP FKBP5 NHSL1 SORL1 SSH2 SSBP2 CHST11 SH3RF3 CHD9 ZBTB20 FHIT UVRAG
      SRGAP2 MYCBP2 CD74 ARHGAP6 SNX29 CELF2 TBC1D22A TBXAS1 MAP4K4 PICALM
      ARHGAP15 EPB41L2 FTL LPCAT2 TTC28 CPED1 DDX5 ITPR2 ARHGAP22 PSMA1 ZSWIM6
      KHDRBS3 SBF2 RAB1A FGD4 USP53 ATXN7L1 TACC1 PTPRJ FER RABGAP1L
    sentences:
      - central nervous system macrophage
      - >-
        MALAT1 PLXDC2 DOCK4 LRMDA ARHGAP24 MEF2A ANKRD44 FRMD4A SLC9A9 KCNQ3
        SRGAP2 APBB1IP SLC1A3 QKI SORL1 MBNL1 RTTN MAML2 HDAC9 TBXAS1 ST6GAL1
        PICALM SMAP2 DOCK8 MED13L LRRK1 LDLRAD4 SLC8A1 NEAT1 LINC01374 CD74
        CELF2 TAB2 PKN2 MGAT4A SNX13 RASAL2 EPB41L2 MEF2C FYB1 SIPA1L2 STAG1
        HIVEP3 FOXP2 KHDRBS3 LRCH1 BMP2K ETV6 LPAR6 TCF12 ZFHX3 SSH2 DIAPH2
        RABGAP1L CYRIB DISP1 ELMO1 RUNX1 DISC1 OXR1 CRADD ZEB2 SOCS6 CHST11
      - neuron
  - source_sentence: >-
      MALAT1 FTL EEF1A1 FTH1 TPT1 IGFBP5 TMSB4X CD74 HSPB1 PTMA APOE IFITM3 UBB
      TMSB10 HSP90AA1 MGP ITM2B EIF1 GAPDH COX7C ACTB SERF2 GATM TIMP3 ATP5F1E
      IGFBP7 NACA RNASE1 FAU JUN UBA52 SRP14 NEAT1 PEBP1 MYL6 LDHB COX4I1 UBC
      CYB5A HINT1 S100A6 MYL12A OAZ1 CRYAB HSPE1 DUSP1 SAT1 H3-3B SOD1 CXCL14
      UQCRB JUNB NDUFA4 TOMM7 RACK1 TXNIP CD59 HSP90AB1 HSPA8 PRDX1 GNG11 CD63
      ADIRF DBI
    sentences:
      - classical monocyte
      - >-
        MALAT1 TMSB4X S100A6 FTL TMSB10 EEF1A1 TIMP3 PTMA FTH1 CD74 TPT1 RNASE1
        SRP14 IFITM3 ACTB HSPB1 S100A4 MYL6 IGFBP5 SERF2 GSN APOE GLUL EMP3 EIF1
        A2M CRIP2 VIM CD59 LGALS1 DDX5 GAPDH COX7C H3-3B FAU S100A11 TM4SF1 UBB
        FOS IFITM2 NACA UBA52 RHOA DYNLL1 MYL12A TSC22D1 OAZ1 ENG EDF1 CD81
        SPARC ATP5F1E IFI6 GNG11 PODXL COX4I1 MEIS2 SERPINE2 ITM2B TXN APP CALM2
        ITGB1 UBC
      - capillary endothelial cell
  - source_sentence: >-
      MALAT1 EEF1A1 PTMA TPT1 ACTB TMSB10 LGALS1 VIM GAPDH TMSB4X RACK1 NACA
      CD74 FAU S100A4 FTH1 UBA52 CYBA S100A10 HSP90AA1 YBX1 EEF2 PFN1 H3-3B CFL1
      EIF1 HMGB1 BTF3 SH3BGRL3 PPIA HSP90AB1 UBC PABPC1 FTL FOS S100A6 GNAS
      DDIT4 TSC22D3 NPM1 ANXA2 ARPC3 SOX4 DBI GSTP1 HSPA8 HNRNPA2B1 JUND MYL6
      DDX5 SERF2 LAPTM5 ENO1 SLC25A3 PRDX1 ATP5F1E COX4I1 OAZ1 TAGLN2 NEAT1
      CHCHD2 ATP5MC2 ARPC2 CORO1A
    sentences:
      - >-
        EEF1A1 MALAT1 TPT1 LGALS1 TMSB4X TMSB10 ACTB PTMA SRGN RACK1 FAU
        HSP90AB1 CYBA GAPDH H3-3B S100A6 UBA52 MYL6 SH3BGRL3 EEF2 HMGB1 FTL OAZ1
        ARPC3 HSPA8 SOX4 SERF2 UQCRB CFL1 EIF1 S100A4 NACA CD74 PABPC1 GNAS SUB1
        RAP1B LCP1 HNRNPU SPI1 PFN1 DDX5 HNRNPA2B1 COX4I1 SARAF UBC ARPC2 CCNL1
        FTH1 IRF2BP2 BCL2 NPM1 SLC25A5 VIM VDAC3 HSP90AA1 REST PSMA7 MYL12A COPE
        EIF3A NCL VAMP8 TUBA1B
      - megakaryocyte-erythroid progenitor cell
      - plasma cell
  - source_sentence: >-
      MALAT1 EEF1A1 NR4A2 FOS H3-3B TPT1 GUK1 FNBP1 USP11 ZNF331 TAGLN2 SYTL3
      JUNB NACA PCBP2 TMSB4X FYN CD74 PABPC1 HSP90AB1 SELENOK RAP1A CCNI DUSP1
      HSPD1 FAU RANBP2 EEF2 ZEB2 NPM1 PTMA CALM1 IDS TNFRSF1B HSPA5 ARAP2 SLC2A3
      FBXW11 PPP1R15A FTL SRPK1 RIN3 PPP1R16B BEX4 OAZ1 TLE5 AKNA KPNB1 DDX5
      WSB1 C11orf58 SLC38A1 SPCS1 PRRC2C PHF3 SERP1 SRGN ADGRE5 ORMDL1 BTG1 CCNH
      CEMIP2 ATP5MC2 TMBIM6
    sentences:
      - alveolar macrophage
      - gamma-delta T cell
      - >-
        FTL FTH1 ACTB TMSB4X TMSB10 CTSD VIM SH3BGRL3 GAPDH CFL1 PSAP CSTB
        EEF1A1 SAT1 PTMA ATP5F1E CTSL DUSP1 SERF2 OAZ1 MALAT1 YBX1 LGMN TUBA1B
        MYL6 TYROBP PFN1 KLF6 TPM3 EIF4A1 MARCKS ZFP36 FABP5 FOS C1QB CALR HSPA8
        VAMP8 BRI3 NEAT1 CD63 S100A11 ANXA5 NME2 AP2S1 HSP90AB1 COTL1 SRSF3 GLUL
        CALM3 LAPTM5 ATP6V0C GSTP1 CTSZ CDC37 ARPC1B TPT1 HMGA1 ARPC2 UQCRQ
        PRELID1 HSPA5 ELOB ARPC3
  - source_sentence: >-
      MALAT1 FOS JUNB KLF6 DUSP1 EEF1A1 EIF1 ZFP36 H3-3B TPT1 PTMA TMSB4X
      TSC22D3 JUND PPP1R15A IER2 JUN IFRD1 KLF2 TNFAIP3 SF1 NFKBIA GLS BTG1 UBC
      HNRNPU EIF4A2 BTG2 EEF2 PELI1 TMSB10 RNF19A ARID4B GADD45B C12orf57 NR4A1
      SBDS SAT1 ARGLU1 MCL1 PPIP5K2 DDX21 PAFAH1B1 LUC7L JARID2 CLK1 STRAP CUL1
      SLC2A3 ACADVL ACTB HSP90AA1 GNAS DYNLL1 DDX17 XBP1 TNRC6B DICER1 NAPA DDX5
      WSB1 NFKB1 ELF2 CTSC
    sentences:
      - >-
        FTL FTH1 ACTB TMSB4X TMSB10 CTSD VIM SH3BGRL3 GAPDH CFL1 PSAP CSTB
        EEF1A1 SAT1 PTMA ATP5F1E CTSL DUSP1 SERF2 OAZ1 MALAT1 YBX1 LGMN TUBA1B
        MYL6 TYROBP PFN1 KLF6 TPM3 EIF4A1 MARCKS ZFP36 FABP5 FOS C1QB CALR HSPA8
        VAMP8 BRI3 NEAT1 CD63 S100A11 ANXA5 NME2 AP2S1 HSP90AB1 COTL1 SRSF3 GLUL
        CALM3 LAPTM5 ATP6V0C GSTP1 CTSZ CDC37 ARPC1B TPT1 HMGA1 ARPC2 UQCRQ
        PRELID1 HSPA5 ELOB ARPC3
      - alveolar macrophage
      - alpha-beta T cell
datasets:
  - jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_cell_type
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy
model-index:
  - name: SentenceTransformer based on NeuML/pubmedbert-base-embeddings
    results:
      - task:
          type: triplet
          name: Triplet
        dataset:
          name: >-
            cellxgene pseudo bulk 100k multiplets cell type cell sentence 2
            caption
          type: >-
            cellxgene_pseudo_bulk_100k_multiplets_cell_type_cell_sentence_2_caption
        metrics:
          - type: cosine_accuracy
            value: 0.7655088305473328
            name: Cosine Accuracy

SentenceTransformer based on NeuML/pubmedbert-base-embeddings

This is a sentence-transformers model finetuned from NeuML/pubmedbert-base-embeddings on the cellxgene_pseudo_bulk_100k_multiplets_cell_type_cell_sentence_2_caption dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: NeuML/pubmedbert-base-embeddings
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_cell_type
  • Language: code

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): MMContextEncoder(
    (text_encoder): BertModel(
      (embeddings): BertEmbeddings(
        (word_embeddings): Embedding(30522, 768, padding_idx=0)
        (position_embeddings): Embedding(512, 768)
        (token_type_embeddings): Embedding(2, 768)
        (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (encoder): BertEncoder(
        (layer): ModuleList(
          (0-11): 12 x BertLayer(
            (attention): BertAttention(
              (self): BertSdpaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
        )
      )
      (pooler): BertPooler(
        (dense): Linear(in_features=768, out_features=768, bias=True)
        (activation): Tanh()
      )
    )
    (text_adapter): AdapterModule(
      (net): Sequential(
        (0): Linear(in_features=768, out_features=512, bias=True)
        (1): ReLU(inplace=True)
        (2): Linear(in_features=512, out_features=1024, bias=True)
        (3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (pooling): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  )
)
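The text_adapter above projects the 768-dimensional BERT output up to the model's 1024-dimensional embedding space through a 512-unit bottleneck. A minimal PyTorch sketch of an equivalent standalone module follows (a hypothetical reimplementation for illustration, not the actual MMContextEncoder code):

```python
import torch
import torch.nn as nn


class TextAdapter(nn.Module):
    """Mirrors the AdapterModule in the architecture printout:
    Linear(768 -> 512) -> ReLU -> Linear(512 -> 1024) -> BatchNorm1d(1024)."""

    def __init__(self, in_dim: int = 768, hidden_dim: int = 512, out_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
            nn.BatchNorm1d(out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # BatchNorm1d expects (N, C); flatten the token dimension first if present.
        if x.dim() == 3:  # (batch, seq_len, dim)
            b, s, d = x.shape
            return self.net(x.reshape(b * s, d)).reshape(b, s, -1)
        return self.net(x)


adapter = TextAdapter().eval()  # eval mode so BatchNorm uses running statistics
out = adapter(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 1024])
```

Mean pooling over the adapted 1024-dimensional token embeddings (per the Pooling config above) then yields the final sentence embedding.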

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-cxg_100k_text_adapter")
# Run inference
sentences = [
    'MALAT1 FOS JUNB KLF6 DUSP1 EEF1A1 EIF1 ZFP36 H3-3B TPT1 PTMA TMSB4X TSC22D3 JUND PPP1R15A IER2 JUN IFRD1 KLF2 TNFAIP3 SF1 NFKBIA GLS BTG1 UBC HNRNPU EIF4A2 BTG2 EEF2 PELI1 TMSB10 RNF19A ARID4B GADD45B C12orf57 NR4A1 SBDS SAT1 ARGLU1 MCL1 PPIP5K2 DDX21 PAFAH1B1 LUC7L JARID2 CLK1 STRAP CUL1 SLC2A3 ACADVL ACTB HSP90AA1 GNAS DYNLL1 DDX17 XBP1 TNRC6B DICER1 NAPA DDX5 WSB1 NFKB1 ELF2 CTSC',
    'alpha-beta T cell',
    'alveolar macrophage',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.0223, 0.3396],
#         [0.0223, 1.0000, 0.1259],
#         [0.3396, 0.1259, 1.0000]])

Evaluation

Metrics

Triplet

  • Dataset: cellxgene_pseudo_bulk_100k_multiplets_cell_type_cell_sentence_2_caption
  • Evaluated with TripletEvaluator
Metric Value
cosine_accuracy 0.7655
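cosine_accuracy is the fraction of evaluation triplets for which the anchor embedding is more cosine-similar to the positive than to the negative. A self-contained NumPy sketch of the metric (illustrative only, not the TripletEvaluator source):

```python
import numpy as np


def triplet_cosine_accuracy(anchors, positives, negatives):
    """Fraction of rows where cos(anchor, positive) > cos(anchor, negative).

    Each argument is an (n, d) array of embeddings."""
    def norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    a, p, n = norm(anchors), norm(positives), norm(negatives)
    sim_pos = (a * p).sum(axis=1)  # row-wise cosine similarities
    sim_neg = (a * n).sum(axis=1)
    return float((sim_pos > sim_neg).mean())


rng = np.random.default_rng(0)
a = rng.normal(size=(8, 4))
# Near-duplicate positives vs. random negatives: accuracy should be ~1.0.
acc = triplet_cosine_accuracy(a, a + 0.01 * rng.normal(size=(8, 4)),
                              rng.normal(size=(8, 4)))
print(acc)
```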

Training Details

Training Dataset

cellxgene_pseudo_bulk_100k_multiplets_cell_type_cell_sentence_2_caption

  • Dataset: cellxgene_pseudo_bulk_100k_multiplets_cell_type_cell_sentence_2_caption at d714546
  • Size: 81,143 training samples
  • Columns: anchor, positive, negative_1, and negative_2
  • Approximate statistics based on the first 1000 samples:
    • anchor: string, 356 to 450 characters (mean 385.24)
    • positive: string, 6 to 74 characters (mean 19.33)
    • negative_1: string, 6 to 57 characters (mean 19.12)
    • negative_2: string, 354 to 431 characters (mean 387.28)
  • Samples:
    Sample 1
      anchor: TMSB4X TMSB10 ACTB MALAT1 GNLY NKG7 IFITM2 LGALS1 GZMA EEF1A1 PFN1 HMGB2 FTH1 PTMA HSP90AA1 GZMB ARHGDIB HNRNPA2B1 PLAAT4 FAU CMC1 VIM MYL12A CBX3 ATP5F1E HCST IFI44L KLRF1 H3-3A COX6C ARL6IP1 CFL1 ISG15 HMGB1 S100A4 ATP5MF RORA MYL6 CORO1A OAZ1 KLRB1 ID2 HMGN3 CCNI RBM39 CAP1 SERF2 ELOC FCER1G S100A9 IFI16 YWHAZ EIF1 CALR HMGN2 SKAP2 SLC25A5 ZZZ3 YBX1 NUCB2 CDC42 GSTP1 FTL ATP5F1D
      positive: lymphocyte
      negative_1: CD8-positive, alpha-beta T cell
      negative_2: TMSB4X MALAT1 TMSB10 EEF1A1 ACTB PTMA PFN1 GAPDH HMGB2 HMGB1 TMA7 GNLY TUBA1B TPT1 FAU YBX1 ATP5F1E CD52 GSTP1 GZMB CORO1A CALM1 HMGN2 RACK1 MYL6 BLOC1S1 S100A6 VIM COTL1 OAZ1 HNRNPA2B1 DEK ETS1 SERF2 SRP14 NDUFS6 GZMA H2AZ1 EEF2 HINT1 UQCRH SRSF10 UBA52 CD74 ENO1 HSP90AA1 HSP90AB1 ARHGDIB COX7C ANXA1 TXN SNRPG MSN UBB COX8A POLR2L UBL5 PKM FTL LGALS1 RBM3 EIF3E CHCHD2 EIF4G2
    Sample 2
      anchor: EEF1A1 MALAT1 FTH1 JUNB TPT1 FOS TMSB10 BTG1 TMSB4X ZFP36L2 NACA PABPC1 ACTB FAU VIM H3-3B EIF1 ZFP36 SARAF PTMA IL7R JUN RACK1 EEF2 UBA52 GAPDH FTL FXYD5 DUSP1 S100A4 CD69 CXCR4 UBC TSC22D3 CFL1 KLF6 ARHGDIB KLF2 BTG2 CITED2 IER2 TUBB4B CD3E EEF1G SLC2A3 NFKBIA PFN1 SRGN SNX9 COX4I1 DNAJB1 SERF2 CD8A PCBP2 IL32 BIRC3 SMAP2 FUS GADD45B MYL12A OAZ1 ATP5F1E TUBA4A PNRC1
      positive: effector memory CD4-positive, alpha-beta T cell
      negative_1: naive thymus-derived CD4-positive, alpha-beta T cell
      negative_2: MALAT1 EEF1A1 TPT1 TMSB4X ACTB TMSB10 FAU JUNB RACK1 FTH1 PTMA IL32 VIM ZFP36L2 IL7R S100A4 NACA FTL PFN1 CD52 EIF1 UBA52 EEF1G PABPC1 SARAF GAPDH SH3BGRL3 EEF2 H3-3B BTG1 TXNIP FXYD5 MYL12A SERF2 CFL1 CALM1 ARHGDIB LDHB ATP5F1E CD3E SLC2A3 NFKBIA CORO1A DDX5 HSPA8 C12orf57 COX7C COX4I1 ITM2B UBC HINT1 TOMM7 PCBP2 S100A6 HSP90AA1 MYL6 HSP90AB1 NOP53 CD69 CXCR4 HNRNPA2B1 PPDPF RAC2 PNRC1
    Sample 3
      anchor: MALAT1 GRIK1 SYT1 PCDH9 RORA NRG1 CADPS ZFPM2 LRRC4C LINGO2 RALYL PTPRD SPHKAP CNTNAP5 SLC8A1 CCSER1 HDAC9 CELF2 R3HDM1 CNTN4 RBMS3 PCDH7 GALNT13 UNC5D ROBO1 SYNPR SNAP25 GPM6A ANK3 FRMPD4 CHRM2 RYR2 KHDRBS2 CADM1 CACNA1D RGS6 PDE4D DOCK4 UNC13C CDH18 FAT3 MEG3 NR2F2-AS1 HMCN1 GULP1 CAMK2D ZEB1 SYN2 DYNC1I1 OXR1 DPP10 OSBPL6 FRAS1 PPP3CA ZNF385D ZMAT4 PCBP3 HS6ST3 ERC2 PLEKHA5 CDK14 MAP2 NCOA1 ATP8A2
      positive: neuron
      negative_1: fibroblast
      negative_2: MALAT1 NRG1 ROBO1 SYT1 TENM2 LRRC4C GPC5 PCDH9 PCDH11X CDH18 PTPRD NOVA1 NKAIN3 MEG8 SLC8A1 MEG3 TAC1 ANK3 EGFEM1P HDAC9 DCC RYR2 RMST TRPM3 ADGRL2 ATP1B1 MAP2 GNAS GRIP1 ZNF804A CCSER1 PRKG1 TNRC6A CALY PAM AGBL4 INPP4B PDE4D AHI1 GALNT13 FRMD4A DOK6 GAPDH ARHGAP24 GABRB2 CNTNAP5 DPP10 HS6ST3 LINC00632 GRM1 CADPS PCDH7 JMJD1C RALYL LHFPL3 STMN2 SYNE1 MAP1B DTNA GAD2 SCN1A DST GRIA1 UNC5D
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
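MultipleNegativesRankingLoss with cos_sim and scale 20.0 treats, for each anchor, its own positive as the correct candidate among all positives in the batch: the anchor-to-candidate cosine-similarity matrix is multiplied by 20 and cross-entropy is taken with the diagonal as labels. A small NumPy sketch of this computation (illustrative; the actual implementation lives in sentence_transformers.losses):

```python
import numpy as np


def mnr_loss(anchors, positives, scale=20.0):
    """Multiple-negatives ranking loss over a batch: each anchor's own
    positive is the target; every other positive is an in-batch negative."""
    def norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    # (n, n) matrix of scaled cosine similarities.
    scores = scale * norm(anchors) @ norm(positives).T
    # Cross-entropy with labels on the diagonal (anchor i matches positive i).
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))


rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
print(mnr_loss(emb, emb))                       # low: each anchor matches itself
print(mnr_loss(emb, rng.normal(size=(4, 8))))   # higher: random pairing
```

The scale of 20 sharpens the softmax so that small cosine-similarity gaps translate into confident rankings.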
    

Evaluation Dataset

cellxgene_pseudo_bulk_100k_multiplets_cell_type_cell_sentence_2_caption

  • Dataset: cellxgene_pseudo_bulk_100k_multiplets_cell_type_cell_sentence_2_caption at d714546
  • Size: 9,011 evaluation samples
  • Columns: anchor, positive, negative_1, and negative_2
  • Approximate statistics based on the first 1000 samples:
    • anchor: string, 347 to 437 characters (mean 386.7)
    • positive: string, 6 to 74 characters (mean 18.08)
    • negative_1: string, 6 to 50 characters (mean 18.26)
    • negative_2: string, 357 to 431 characters (mean 386.18)
  • Samples:
    Sample 1
      anchor: MALAT1 EEF1A1 FTH1 TMSB4X ACTB FTL RTN4 ATP6V0B TPT1 FAU S100A6 NDUFA4 ATP5F1E COX7C ITM2B IGFBP7 EIF1 C12orf75 CD9 COX7B SERF2 ATP1B1 COX8A TXNIP NDUFB2 MYL6 PPDPF COX6B1 UQCR11 APOE COX4I1 CALM2 UQCRB S100A11 UQCRQ COX6C ATP5MG BSG ATP6AP2 UQCR10 PTMA NACA UBL5 UBA52 TMSB10 ADGRF5 HSP90AA1 GSTP1 ATP5F1D CHCHD2 GAPDH COX7A2 SKP1 HSPE1 PRDX1 CYSTM1 LGALS3 CD63 ATP5MJ CKB NDUFS5 ATP5ME UBB MAL
      positive: kidney collecting duct intercalated cell
      negative_1: epithelial cell of proximal tubule
      negative_2: MALAT1 EEF1A1 IGFBP7 CALB1 FTH1 COX7C NDUFA4 S100A6 ATP5F1E ATP1B1 RNASEK SERF2 TMSB4X FTL TPT1 ITM2B COX7B COX6C FAU EIF1 ATP6V0C ATP6V0B COX8A PTMA RTN4 GAPDH CYSTM1 COX4I1 UQCRB ATP5MG UQCRQ MAL NDUFB2 MYL6 CTSD NDUFA1 UQCR11 APOE CD63 SRP14 BSG UBL5 COX7A2 COX6B1 ATP5PF PPIA ACTB ATP5MC3 NDUFA13 NACA TMA7 ATP5ME UQCRH CHCHD2 LGALS3 S100A11 GSTP1 LDHB CKB NDUFS5 UBB MICOS10 ATP5MK S100A10
    Sample 2
      anchor: MALAT1 KCND2 NRXN1 CDH18 NRXN3 ZNF385D CADM2 RALYL NKAIN2 CADPS2 RIMS1 FSTL5 GRID2 TRPM3 CHN2 DPP6 JMJD1C RORA PDE1A UNC13C TIAM1 NRG1 SNAP25 ZFPM2 CALN1 LSAMP CNTN1 ABLIM1 SYNE1 ANK3 CA10 NFIA ZBTB20 NTM CADM1 OPCML RELN DNM3 NEBL ERC1 SCN2A PPP3CA CACNA1A GALNT13 LRRC4C GPM6A RABGAP1L RIT2 CAMK4 GRIA4 PTPRD RBFOX3 MCTP1 LHFPL6 PCLO MEG3 PDE10A NOVA1 RTN1 ZNF385B CNTN4 GABRB2 SPOCK1 OXR1
      positive: neuron
      negative_1: fibroblast
      negative_2: MALAT1 KCND2 NRXN1 ZNF385D NRXN3 CDH18 CADM2 RALYL CADPS2 GRID2 RIMS1 FSTL5 DPP6 NRG1 JMJD1C RORA TRPM3 CHN2 TIAM1 ZFPM2 LSAMP CNTN1 PDE1A UNC13C ZBTB20 NKAIN2 DNM3 GALNT13 GRIA4 CA10 NTM CADM1 ABLIM1 SYNE1 SNAP25 PPP3CA ANK3 CAMK4 PTPRD CALN1 LHFPL6 SGCZ WWOX MEG3 EPHA6 ERC1 PLCB4 SCN2A CACNA1A CNTN4 GPM6A RABGAP1L RIT2 SPOCK1 NFIA PCLO RELN RYR2 NEBL RUNX1T1 SH3GL2 PDE10A PATJ NOVA1
    Sample 3
      anchor: EEF1A1 ACTB GAPDH HMGN2 PTMA SERF2 TMSB4X CD74 PABPC1 FTH1 TMSB10 FAU PFN1 HMGN1 OAZ1 HMGB1 TPT1 PPIA NACA BTF3 MALAT1 MYL6 ATP5MG CFL1 RACK1 ODC1 ATP5F1E TMA7 SLC25A5 ELOB ARPC3 NPM1 COX7C ANP32B C4orf3 EIF1 PCBP2 KLF6 LAPTM5 COX8A RHOA HSPA8 H3-3B PTP4A2 UBA52 OST4 CIRBP LGALS1 EIF3L STMN1 PPDPF COX4I1 RAN EIF3F PPP1CC COMMD6 NDUFA4 YBX1 PEBP1 COTL1 COX7A2 HSPE1 CCNI TRIR
      positive: centroblast
      negative_1: tonsil germinal center B cell
      negative_2: EEF1A1 CD74 ACTB TPT1 GAPDH SERF2 TMSB4X MALAT1 OAZ1 ATP5MG FTL EEF2 FAU LAPTM5 FTH1 PFN1 BTF3 EIF1 PTMA PPIA RACK1 TMSB10 CCNI COX4I1 C4orf3 HMGB1 NACA HMGN1 UBA52 PABPC1 MYL6 ATP5F1E SEC14L1 BTG1 ATP5MC2 ARPC2 YWHAZ CFL1 NPM1 COMMD6 PCBP2 OST4 SLC25A5 CSDE1 MEF2C EZR EIF3L RBM3 CORO1A UBE2I METAP2 ARPC3 C12orf57 MOB4 PARK7 COX6B1 RAN H3-3B LCP1 SRP14 SH3BGRL3 SNRPG EIF3H SAP18
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • warmup_ratio: 0.1
  • bf16: True
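These non-default values map onto a SentenceTransformerTrainingArguments configuration roughly as follows (a hedged sketch; the output_dir is hypothetical and the actual training script is not included in this card):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Sketch of the non-default hyperparameters listed above; all other
# arguments keep their library defaults (see "All Hyperparameters").
args = SentenceTransformerTrainingArguments(
    output_dir="mmcontext-pubmedbert-cxg_100k_text_adapter",  # hypothetical
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=2e-5,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
)
```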

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Validation loss and cosine accuracy are measured on the cellxgene_pseudo_bulk_100k_multiplets_cell_type_cell_sentence_2_caption evaluation set.

Epoch Step Training Loss Validation Loss Cosine Accuracy
0.3155 100 6.4203 15.1236 0.7166
0.6309 200 4.8279 17.3417 0.6353
0.9464 300 5.6302 15.3011 0.5781
1.2618 400 5.1935 12.3394 0.6423
1.5773 500 4.8741 11.7850 0.7072
1.8927 600 4.6635 11.4664 0.7145
2.2082 700 4.3177 12.4533 0.7556
2.5237 800 3.9556 10.7686 0.8235
2.8391 900 3.7546 9.9743 0.7465
3.1546 1000 4.4209 7.8947 0.7440
3.4700 1100 4.1724 8.3401 0.7585
3.7855 1200 4.0596 8.7027 0.7655

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 5.0.0
  • Transformers: 4.55.0.dev0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.9.0
  • Datasets: 2.19.1
  • Tokenizers: 0.21.4

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}