# Sparse Upcycling: Converting a Dense Model to MoE — Qwen2-MoE Technique Reconstruction

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-sparse-upcycling-dense-to-moe
> Updated: 2026-05-14T14:42:53.557Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part V — MoE Internals & Fine-Tuning
**TLDR:** Sparse Upcycling (Komatsuzaki et al. 2022) converts a dense pre-trained model into an MoE and then continues pre-training so the experts specialize: copy the existing FFN N times, add a router, and keep training. This is far cheaper than pre-training an MoE from scratch. Includes a Qwen 2.5 7B → 7B-MoE (8 experts) conversion lab on an RTX 4090.
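The "copy the FFN N times, add a router" recipe from the TLDR can be sketched in plain PyTorch. This is a minimal illustration, not the article's lab code: the class names (`DenseFFN`, `UpcycledMoE`) are hypothetical, and the router uses Mixtral-style top-k routing with a softmax over the selected logits as one common assumption. A useful sanity check falls out of the construction: since every expert starts as an identical copy and the routing weights sum to 1, the upcycled layer is functionally equivalent to the dense FFN at initialization.

```python
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseFFN(nn.Module):
    """Stand-in for a pre-trained dense FFN block (SwiGLU-free for brevity)."""

    def __init__(self, d_model: int = 32, d_ff: int = 64):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.up(x)))


class UpcycledMoE(nn.Module):
    """MoE layer whose experts are all initialized as copies of one dense FFN."""

    def __init__(self, dense_ffn: DenseFFN, num_experts: int = 8,
                 top_k: int = 2, d_model: int = 32):
        super().__init__()
        # Sparse upcycling step 1: copy the pre-trained FFN N times,
        # so every expert starts from the same dense weights.
        self.experts = nn.ModuleList(
            copy.deepcopy(dense_ffn) for _ in range(num_experts)
        )
        # Step 2: add a fresh router, trained from scratch during
        # continued pre-training (assumed linear, no bias).
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Pick top-k experts per token and
        # normalize their logits so the weights sum to 1.
        logits = self.router(x)
        weights, idx = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out


dense = DenseFFN()
moe = UpcycledMoE(dense, num_experts=8, top_k=2)
x = torch.randn(4, 32)
y = moe(x)
```

Because the experts have not yet diverged, `y` matches `dense(x)` exactly; only after continued training on new data do the router and experts break this symmetry.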

