---
license: apache-2.0
datasets:
- Magpie-Align/Magpie-Pro-300K-Filtered
- mlabonne/FineTome-100k
- unsloth/OpenMathReasoning-mini
- prithivMLmods/Grade-Math-18K
language:
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- code
- moe
---

![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/s65vynroXyhAS6Y_3nLxE.png)

# Magpie-Qwen-CortexDual-0.6B

> **Magpie-Qwen-CortexDual-0.6B** is a compact, general-purpose model specialized for **math**, **code**, and **structured reasoning**. Built with the **CortexDual thinking mode**, it adapts to the complexity of a problem, automatically shifting into stepwise reasoning for intricate logic or math tasks. This 0.6B-parameter model is trained on **80% of the Magpie-Pro-300K-Filtered dataset** together with a modular blend of datasets for general-purpose proficiency and domain versatility.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Magpie-Qwen-CortexDual-0.6B-GGUF](https://huggingface.co/prithivMLmods/Magpie-Qwen-CortexDual-0.6B-GGUF)

---

## Key Features

1. **Adaptive Reasoning via CortexDual**
   Automatically switches into a deeper thinking mode for complex problems, simulating trace-style deduction for higher-order tasks in math and code.

2. **Efficient and Compact**
   At 0.6B parameters, it is optimized for deployment in constrained environments while retaining high fidelity in logic, computation, and structured formatting.

3. **Magpie-Driven Data Synthesis**
   Trained on 80% of **Magpie-Pro-300K-Filtered**, a high-quality alignment and reasoning dataset, complemented with curated modular datasets for enhanced general-purpose capability.

4. **Mathematical Precision**
   Fine-tuned for arithmetic, algebra, calculus, and symbolic logic; suited to STEM learning platforms, math solvers, and step-by-step tutoring.

5. **Lightweight Code Assistance**
   Understands and generates code in Python, JavaScript, and other common languages, with contextual accuracy and explanation support.

6. **Structured Output Generation**
   Produces Markdown, JSON, and table outputs, suitable for technical documentation, instruction generation, and structured reasoning.

7. **Multilingual Competence**
   Supports over 20 languages with reasoning and translation support, extending its reach to global educational and development use.

---

## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Magpie-Qwen-CortexDual-0.6B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to check if a number is prime. Explain each step."
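
# The lines below follow the standard Transformers chat-template workflow:
# build the chat messages, render them with the tokenizer's chat template,
# tokenize, generate, and strip the prompt tokens from the output.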
messages = [
    {"role": "system", "content": "You are an AI tutor skilled in both math and code."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

---

## Demo Inference

> [!warning]
> Non-thinking (direct, reactive, retrieval-based responses)

![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/i6be8JWvXZMA1Zu14yqMR.png)

> [!warning]
> Thinking (reasoning, planning, deeper analysis)

![3.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/c1vy029GVOo2PUBA_XfR6.png)

![4.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/vDlsd1UDN_I0jiS_uwd7X.png)

---

## Intended Use

* General-purpose problem solving in math, logic, and code
* Interactive STEM tutoring and reasoning explanation
* Compact assistant for technical documentation and structured data tasks
* Multilingual applications with a focus on accurate technical reasoning
* Efficient offline deployment on low-resource devices

---

## Limitations

* Lower creativity and open-domain generation due to reasoning-focused tuning
* Limited context window size due to compact model size
* May produce simplified logic paths in highly abstract domains
* Trade-offs in diversity and expressiveness compared to larger instruction-tuned models

---

## References

1. [Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing](https://arxiv.org/pdf/2406.08464)
2. [Qwen2.5 Technical Report](https://arxiv.org/pdf/2412.15115)
3. [YaRN: Efficient Context Window Extension of Large Language Models](https://arxiv.org/pdf/2309.00071)
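
---

## Toggling the Thinking Mode (Sketch)

The Demo Inference section above contrasts direct ("non-thinking") answers with stepwise ("thinking") traces. The sketch below assumes this fine-tune inherits the base Qwen3-0.6B chat template, whose `enable_thinking` flag switches between the two behaviors; if the template was changed during fine-tuning, the flag may have no effect. The math prompt is only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Magpie-Qwen-CortexDual-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "If 3x + 7 = 25, what is x? Show your steps."}]

# enable_thinking is a Qwen3 chat-template flag; it is assumed (not confirmed)
# to be preserved by this fine-tune. True requests a stepwise reasoning trace,
# False requests a direct answer.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**model_inputs, max_new_tokens=512)

# Decode only the newly generated tokens.
response = tokenizer.decode(
    output_ids[0][model_inputs.input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```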