# II-Search-4B

> A 4B-parameter language model specialized in information seeking, multi-hop reasoning, and web-integrated search, achieving state-of-the-art performance among models of similar size.
## Model Description

II-Search-4B is a 4B-parameter language model based on Qwen3-4B, fine-tuned specifically for information-seeking tasks and web-integrated reasoning. It excels at complex multi-hop information retrieval, fact verification, and comprehensive report generation.

### Key Features

- Enhanced tool usage for web search and webpage visits
- Multi-hop reasoning capabilities with sophisticated planning
- Verified information retrieval with cross-checking
- Strong performance on factual QA benchmarks
- Comprehensive report generation for research queries
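The search-then-visit tool-use pattern listed above can be sketched as a minimal agent loop. The tool names, schemas, and the `model_step` interface below are illustrative assumptions for exposition, not the model's actual API:

```python
def run_tool(name, args, tools):
    """Dispatch a model-issued tool call to a local implementation."""
    return tools[name](**args)

def agent_loop(question, model_step, tools, max_turns=8):
    """Alternate model reasoning with tool calls until a final answer.

    model_step(history) returns either ("tool", name, args)
    or ("answer", text).
    """
    history = [("user", question)]
    for _ in range(max_turns):
        action = model_step(history)
        if action[0] == "answer":
            return action[1]
        _, name, args = action
        # Feed the tool observation back to the model on the next turn.
        observation = run_tool(name, args, tools)
        history.append(("tool", name, observation))
    return None  # turn budget exhausted

# Stub tools standing in for real web search / page fetching:
tools = {
    "web_search": lambda query: ["https://example.com/doc"],
    "visit_webpage": lambda url: "page text ...",
}
```

In practice the stubs would be replaced by a search API and an HTML fetcher, and `model_step` by the model's function-calling output.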
## Training Methodology

Our training process consisted of three phases:

### Phase 1: Tool Call Ability Stimulation

We distilled reasoning paths with function calling from a larger model (Qwen3-235B) on multi-hop datasets. This established the base capability for tool use.

### Phase 2: Reasoning Improvement

We addressed initial limitations by:

- Creating synthetic problems that require more reasoning turns, inspired by random-walk algorithms
- Refining reasoning thought patterns for more efficient, cleaner reasoning paths
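The random-walk idea can be illustrated with a toy entity graph: sampling a walk over relations and chaining the hops yields a question whose answer requires following every hop. The graph, template, and function below are illustrative stand-ins, not the actual synthesis pipeline:

```python
import random

# Toy knowledge graph: entity -> list of (relation, neighbor) edges.
GRAPH = {
    "Ada Lovelace": [("collaborated with", "Charles Babbage")],
    "Charles Babbage": [("designed", "Analytical Engine")],
    "Analytical Engine": [("described in", "Sketch of the Analytical Engine")],
}

def random_walk_question(graph, start, hops, rng=random):
    """Walk `hops` relations from `start` and chain each hop into a
    question whose answer is the final entity reached."""
    entity, clauses = start, []
    for _ in range(hops):
        # Stay in place if the entity has no outgoing edges.
        relation, entity = rng.choice(graph.get(entity, [("", entity)]))
        clauses.append(relation)
    question = (f"Starting from {start}, follow: "
                + ", then ".join(clauses) + ". What do you reach?")
    return question, entity
```

Increasing `hops` directly increases the number of retrieval turns a model needs to answer, which is the lever this phase exploits.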
### Phase 3: Rejection Sampling & Report Generation

We applied:

- Filtering to keep only high-quality reasoning traces (correct answers reached with proper reasoning)
- STORM-inspired techniques to enhance comprehensive report generation
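A minimal sketch of the rejection-sampling filter, assuming each trace records a final answer and a tool-call count; the actual quality criteria used for II-Search-4B are an assumption here:

```python
def keep_trace(trace, gold_answer, min_tool_calls=1):
    """Keep a trace only if its final answer matches gold and it
    actually used tools (a crude proxy for 'proper reasoning')."""
    answer_ok = trace["final_answer"].strip().lower() == gold_answer.strip().lower()
    reasoning_ok = trace["num_tool_calls"] >= min_tool_calls
    return answer_ok and reasoning_ok

def filter_traces(traces, gold_answer):
    """Rejection sampling: discard traces that fail the quality check."""
    return [t for t in traces if keep_trace(t, gold_answer)]
```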
## Performance

| **Benchmark** | **Qwen3-4B** | **Jan-4B** | **WebSailor-3B** | **II-Search-4B** |
| --- | --- | --- | --- | --- |
| OpenAI/SimpleQA | 76.8 | 80.1 | 81.8 | 91.8 |
| Google/Frames | 30.7 | 24.8 | 34.0 | 67.5 |
| Seal_0 | 6.31 | 2.7 | 1.8 | 22.5 |
### Tool Usage Comparison

**Simple QA (SerpDev)** — average tool calls per query:

| | **Qwen3-4B** | **Jan-4B** | **WebSailor-3B** | **II-Search-4B** |
| --- | --- | --- | --- | --- |
| # Search | 1.0 | 0.9 | 2.1 | 2.2 |
| # Visit | 0.1 | 1.9 | 6.4 | 3.5 |
| # Total Tools | 1.1 | 2.8 | 8.5 | 5.7 |

All benchmark traces can be found at https://huggingface.co/datasets/II-Vietnam/Inspect-Search-Models-Benchmarking-Result
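Per-query averages like those in the table can be recomputed from raw traces with a small helper; the trace schema (`num_search`, `num_visit`) is an assumed stand-in for whatever format the published traces use:

```python
def tool_call_stats(traces):
    """Average per-query search and visit counts, rounded to one decimal."""
    n = len(traces)
    searches = sum(t["num_search"] for t in traces) / n
    visits = sum(t["num_visit"] for t in traces) / n
    return {
        "# Search": round(searches, 1),
        "# Visit": round(visits, 1),
        "# Total Tools": round(searches + visits, 1),
    }
```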
## Intended Use

II-Search-4B is designed for:

- Information seeking and factual question answering
- Research assistance and comprehensive report generation
- Fact verification and evidence-based reasoning
- Educational and research applications requiring factual accuracy
## Usage

The model can be run locally with LMStudio.
### Recommended Generation Parameters

```python
generate_cfg = {
    'thought_in_content': True,
    'top_p': 0.95,
    'temperature': 0.6,
    'repetition_penalty': 1.1,
    'max_tokens': 2048,
}
```
For queries that need a short, exact answer, append the following phrase to the prompt: "\n\nPlease reason step-by-step and put the final answer within \\boxed{}."
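A small helper (hypothetical, named here for convenience) that appends the recommended suffix:

```python
# Suffix recommended for short, exact-answer queries.
BOXED_SUFFIX = "\n\nPlease reason step-by-step and put the final answer within \\boxed{}."

def short_answer_prompt(query):
    """Append the boxed-answer instruction to a user query."""
    return query + BOXED_SUFFIX
```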
## Citation

```bibtex
@misc{II-Search-4B,
  author = {Intelligent Internet},
  title = {II-Search-4B: Information Seeking and Web-Integrated Reasoning LLM},
  year = {2025},
  publisher = {Hugging Face},
  journal = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.co/II-Vietnam/II-Search-4B}},
}
```