---
license: apache-2.0
language:
- en
base_model:
- Menlo/Jan-edge
pipeline_tag: text-generation
library_name: transformers
---
# Jan-v1-edge: Distilled for Edge, Built for Web Search
[GitHub](https://github.com/menloresearch/deep-research)
[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0)
[Jan App](https://jan.ai/)
## Overview
**Jan-v1-edge** is a lightweight agentic model built for fast, reliable on-device execution. As the second release in the **Jan Family**, it is distilled from the larger [`Jan-v1`](https://huggingface.co/janhq/Jan-v1-4B) model, preserving strong reasoning and problem-solving ability in a smaller footprint suitable for resource-constrained environments.
Jan-v1-edge was developed through a two-phase post-training process. The first phase, **Supervised Fine-Tuning (SFT)**, transferred core capabilities from the `Jan-v1` teacher model to the smaller student. The second phase, **Reinforcement Learning with Verifiable Rewards (RLVR)**, the same method used in `Jan-v1` and `Lucy`, further optimized reasoning efficiency, tool use, and correctness. This staged approach delivers reliable results on complex, interactive workloads.
## Performance
### Question Answering (SimpleQA)
Despite having only 1.7B parameters, **Jan-v1-edge** achieves 83% accuracy on SimpleQA, nearly matching the larger Jan-nano-128k, demonstrating its efficiency and robustness.

### Chat & Instruction Following

Compared with Qwen3 1.7B (thinking mode), Jan-v1-edge shows a slight regression on instruction following and creative writing, while remaining comparable or better on EQ-Bench and recency-focused QA.
## Quick Start
### Integration with Jan App
Jan-v1-edge is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities.
### Local Deployment
**Using vLLM:**
```bash
vllm serve janhq/Jan-v1-edge \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
```
**Using llama.cpp:**
```bash
llama-server --model Jan-v1-edge-Q8_0.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift
```
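Both commands start an OpenAI-compatible server on port 1234, with tool-call parsing enabled. As a minimal sketch of what a tool-use request looks like (the `/v1/chat/completions` path and tool schema follow the OpenAI convention; `get_weather` is a made-up example function, not part of the model), the request body can be assembled like this:

```python
import json

# Hypothetical tool definition in the OpenAI function-calling schema;
# the name and parameters here are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "janhq/Jan-v1-edge",
    "messages": [{"role": "user", "content": "What is the weather in Hanoi?"}],
    "tools": tools,
}

# POST this JSON to http://localhost:1234/v1/chat/completions
body = json.dumps(payload)
print(body)
```

With `--enable-auto-tool-choice` (vLLM) or `--jinja` (llama.cpp), the server decides when to emit a tool call and returns it in the `tool_calls` field of the response message.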
### Recommended Inference Parameters
```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```
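These parameters map directly onto the request body of the OpenAI-compatible endpoint exposed by either server (`top_k` and `min_p` are server-side extensions accepted by vLLM and llama.cpp, not part of the core OpenAI schema). A minimal sketch, assuming a server running on `localhost:1234` as configured above:

```python
import json
import urllib.request

# Recommended sampling parameters from the table above.
params = {
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "max_tokens": 2048,
}

payload = {
    "model": "janhq/Jan-v1-edge",
    "messages": [{"role": "user", "content": "Explain what distillation is."}],
    **params,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:   # uncomment with the server running
#     print(json.load(resp)["choices"][0]["message"]["content"])
```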
## 🤝 Community & Support
-   **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-edge/discussions)
-   **Jan App**: Discover more about the Jan App at [jan.ai](https://jan.ai/)
## 📄 Citation
```bibtex
Updated Soon
```