Jan-v1-edge: Distilled for Edge, Built for Web Search


Overview

Jan-v1-edge is a lightweight agentic model built for fast, reliable on-device execution. As the second release in the Jan Family, it is distilled from the larger Jan-v1 model, preserving strong reasoning and problem-solving ability in a smaller footprint suitable for resource-constrained environments.

Jan-v1-edge was developed through a two-phase post-training process. The first phase, Supervised Fine-Tuning (SFT), transferred core capabilities from the Jan-v1 teacher model to the smaller student. The second phase applied Reinforcement Learning with Verifiable Rewards (RLVR), the same method used for Jan-v1 and Lucy, to further optimize reasoning efficiency, tool use, and correctness. This staged approach delivers reliable results on complex, interactive workloads.

Performance

Question Answering (SimpleQA)

Despite having only 1.7B parameters, Jan-v1-edge achieves 83% accuracy—nearly matching the larger Jan-nano-128k—demonstrating its efficiency and robustness.

[Figure: SimpleQA accuracy of Jan-v1-edge compared with Jan-nano-128k]

Chat & Instruction Following

[Figure: Chat and instruction-following benchmark results for Jan-v1-edge versus Qwen3 1.7B (Thinking)]

Compared with Qwen3 1.7B (Thinking), Jan-v1-edge shows a slight degradation on instruction following and Creative Writing, while remaining comparable or better on EQ-Bench and recency QA.

Quick Start

Integration with Jan App

Jan-v1-edge is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.

Local Deployment

Using vLLM:

vllm serve janhq/Jan-v1-edge \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
    

Using llama.cpp:

llama-server --model Jan-v1-edge-Q8_0.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift
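
Both commands expose an OpenAI-compatible API on localhost:1234. The sketch below shows how a tool-augmented request might look; the web_search tool is a hypothetical example (the model only emits the tool call, and executing the search is up to the calling application), and llama.cpp typically ignores the model field in favor of the loaded GGUF:

# Ask the model to call a hypothetical web_search tool via the OpenAI-compatible endpoint
curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "janhq/Jan-v1-edge",
        "messages": [{"role": "user", "content": "What is the latest stable release of llama.cpp?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the web and return relevant snippets",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "The search query"}
                    },
                    "required": ["query"]
                }
            }
        }]
    }'

If tool calling is set up as above, the response should include a tool_calls entry containing the generated web_search arguments.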

Recommended Inference Parameters

temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
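
These values can be supplied per request through the same OpenAI-compatible endpoint. A minimal sketch, assuming the local server started above; temperature, top_p, and max_tokens are standard fields, while top_k and min_p are server-specific extensions accepted by vLLM and llama.cpp:

# Chat completion request using the recommended sampling parameters
curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "janhq/Jan-v1-edge",
        "messages": [{"role": "user", "content": "Explain what distillation means for language models."}],
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 20,
        "min_p": 0.0,
        "max_tokens": 2048
    }'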

🤝 Community & Support

📄 Citation

Updated Soon