🧠 Synapse-Base: The Hybrid Chess Foundation (v3.0)

Synapse Banner

Model Series: Synapse | Parameters: 38.1M | Architecture: CNN-Transformer | Deployment: WASM Ready


📖 Model Details

Synapse-Base is a foundation-scale neural chess evaluator. It is the first model in the GambitFlow project to utilize a Hybrid CNN-Transformer architecture. By merging spatial perception (CNN) with long-range strategic reasoning (Transformer), Synapse-Base identifies complex patterns like pins, skewers, and multi-move tactical sequences that traditional CNNs often overlook.

  • Developer: GambitFlow / Rafsan1711
  • Model Type: Evaluation & Move Prediction Foundation Model
  • License: CC BY-NC 4.0
  • Parent Series: Synapse Ecosystem

🚀 Performance Snapshot

Metric | Value | Test Set
MSE | 0.0452 | GambitFlow-Elite (v2.0)
Top-1 Move Match | 81.2% | Master-level positions (>2000 Elo)
Inference Speed | ~42 ms/position | T4 GPU (half precision)

🏗️ Technical Specifications

Hybrid Architecture

Synapse-Base employs a "Residual Neck" design, sketched in PyTorch after the list below:

  1. Residual Backbone (CNN): 20 ResNet-style residual blocks for immediate board feature extraction.
  2. Attention Neck (Transformer): 4 layers of Multi-Head Self-Attention (8 heads) to model relationships between distant squares (e.g., a Bishop pinning a Queen from across the board).
  3. Dual Heads:
    • Value Head: predicts the expected game outcome as a single score in [-1.0, 1.0].
    • Policy Head: ships with pre-trained weights for move suggestion (a 4096-logit move distribution).
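
The block layout above can be approximated in a compact PyTorch sketch. The module below is illustrative only: the class and attribute names (SynapseBaseSketch, stem, neck) are hypothetical, and the real SynapseBase implementation may pool features and shape its heads differently.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, filters=256):
        super().__init__()
        self.conv1 = nn.Conv2d(filters, filters, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(filters)
        self.conv2 = nn.Conv2d(filters, filters, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(filters)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # residual skip connection

class SynapseBaseSketch(nn.Module):
    def __init__(self, channels=119, filters=256, blocks=20, heads=8, neck_layers=4):
        super().__init__()
        self.stem = nn.Conv2d(channels, filters, 3, padding=1)
        self.backbone = nn.Sequential(*[ResidualBlock(filters) for _ in range(blocks)])
        layer = nn.TransformerEncoderLayer(d_model=filters, nhead=heads, batch_first=True)
        self.neck = nn.TransformerEncoder(layer, num_layers=neck_layers)
        self.value_head = nn.Sequential(nn.Linear(filters, 256), nn.ReLU(), nn.Linear(256, 1), nn.Tanh())
        self.policy_head = nn.Linear(filters, 4096)

    def forward(self, x):  # x: (batch, 119, 8, 8)
        x = self.backbone(self.stem(x))        # spatial feature extraction
        tokens = x.flatten(2).transpose(1, 2)  # (batch, 64 squares, filters)
        tokens = self.neck(tokens)             # square-to-square self-attention
        pooled = tokens.mean(dim=1)            # aggregate over the 64 squares
        return self.value_head(pooled), self.policy_head(pooled)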

💾 Input Representation: The 119 Channels

To prevent "tactical blindness," the model consumes a dense 119-plane board encoding (a 119 × 8 × 8 tensor per position); a minimal encoder sketch follows the channel list:

  • 0-11: Piece placement (6 white and 6 black piece types).
  • 12-26: Board metadata (side to move, castling rights, en passant, check status).
  • 27-50: Tactical Vision Maps (attack and defense heatmaps).
  • 51-66: Coordinate Masking (rank and file identifiers).
  • 67-118: Static Positional Biases (pre-trained piece-square table heuristics).
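
To make the plane layout concrete, here is a minimal encoder sketch built on the python-chess library. Only the piece-placement planes and a few metadata planes are filled in; the exact plane ordering, the remaining plane groups, and the helper name encode_board are illustrative assumptions, not GambitFlow's actual tooling.

import numpy as np
import chess

def encode_board(board):
    """Return an assumed 119 x 8 x 8 float32 encoding of a python-chess board."""
    planes = np.zeros((119, 8, 8), dtype=np.float32)

    # Planes 0-11: one plane per (color, piece type) pair.
    for square, piece in board.piece_map().items():
        offset = 0 if piece.color == chess.WHITE else 6
        plane = offset + piece.piece_type - 1  # PAWN=1 ... KING=6
        planes[plane, chess.square_rank(square), chess.square_file(square)] = 1.0

    # Planes 12+: scalar metadata broadcast over the full 8x8 grid.
    planes[12, :, :] = 1.0 if board.turn == chess.WHITE else 0.0
    planes[13, :, :] = float(board.has_kingside_castling_rights(chess.WHITE))
    planes[14, :, :] = float(board.has_queenside_castling_rights(chess.WHITE))
    planes[15, :, :] = float(board.has_kingside_castling_rights(chess.BLACK))
    planes[16, :, :] = float(board.has_queenside_castling_rights(chess.BLACK))
    planes[17, :, :] = float(board.is_check())
    # Attack/defense maps, coordinate masks and PST biases fill the remaining planes.
    return planes

print(encode_board(chess.Board()).shape)  # (119, 8, 8)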

📊 Training Data

The model was trained on the GambitFlow Elite Database:

  • Source: 5,000,000+ filtered Lichess games.
  • Criteria: Both players must be rated 2000 Elo or higher (a filter sketch follows this list).
  • Goal: To ensure the model learns professional strategy and avoids amateur blunders.
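
The Elo criterion above is straightforward to reproduce with the python-chess PGN reader. The sketch below only illustrates that filter; the function name elite_games and the file handling are assumptions, not the pipeline used to build the database.

import chess.pgn

def elite_games(pgn_path, min_elo=2000):
    """Yield games from a PGN file where both players meet the Elo threshold."""
    with open(pgn_path, encoding="utf-8") as handle:
        while True:
            game = chess.pgn.read_game(handle)
            if game is None:
                break
            try:
                white = int(game.headers.get("WhiteElo", 0))
                black = int(game.headers.get("BlackElo", 0))
            except ValueError:
                continue  # skip games with unknown ("?") ratings
            if white >= min_elo and black >= min_elo:
                yield game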

💻 How to Get Started

Quick Inference (JavaScript)

import * as ort from 'onnxruntime-web';

// Load the session
const session = await ort.InferenceSession.create('./synapse_base.onnx');

// Prepare the 119-channel input (zero-filled placeholder; fill with a real encoded position)
const input = new Float32Array(1 * 119 * 8 * 8); 
const tensor = new ort.Tensor('float32', input, [1, 119, 8, 8]);

// Run
const { value } = await session.run({ board_state: tensor });
console.log("Evaluation:", value.data[0]);

Weights for Fine-Tuning (PyTorch)

The synapse_base.pth file contains the full state dictionary for further training.

import torch
# SynapseBase is the model class defined in the repository's training code.
model = SynapseBase(num_residual_blocks=20, num_filters=256)
model.load_state_dict(torch.load('synapse_base.pth'))
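
Continuing from the snippet above, a single fine-tuning step might look like the following. This is a hedged sketch: the batch tensors are random placeholders, the optimizer settings are arbitrary, and the assumption that the model returns a (value, policy) pair should be checked against the repository's training code.

import torch.nn.functional as F

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

boards = torch.randn(32, 119, 8, 8)      # placeholder batch of encoded positions
targets = torch.rand(32, 1) * 2.0 - 1.0  # placeholder evaluations in [-1.0, 1.0]

value, _policy = model(boards)           # assumed output order: (value, policy)
loss = F.mse_loss(value, targets)        # matches the MSE objective reported above
optimizer.zero_grad()
loss.backward()
optimizer.step()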

📝 Release Checklist

  • Pre-trained on 5M+ Master positions.
  • 119-Channel density validation (No empty layers).
  • ONNX Opset 17 export validation.
  • README metadata compliant with HF Hub spec.
  • Verified compatibility with 18GB CPU HF Spaces.

⚠️ Limitations

  • Endgame Precision: May misevaluate 7-piece tablebase draws that require exact play.
  • Commercial Use: Restricted under CC BY-NC 4.0.

Part of the GambitFlow Project. Flow with the perfect move. ♟️
