# Qwen2.5-Coder 7B/14B/32B: Repo-Level Context + FIM Native FT

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-qwen-coder-repo-level-context
> Updated: 2026-05-14T14:42:55.406Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part VIII — Code Models & Repo-Level FT

**TLDR:** The Qwen2.5-Coder family is 2025's strongest open code LLM line: native FIM support, 128K context, and training optimized for repo-level tasks. The 32B model scores 92.7% on HumanEval and 31.6% on SWE-Bench-Lite. The 7B fine-tunes with QLoRA on an RTX 4090 in ~40 minutes; the 32B fits on a single cloud H100 80GB.
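Because the family is FIM-native and repo-aware, its prompts use dedicated special tokens rather than plain concatenation. The sketch below shows the two formats, assuming the special-token names published for Qwen2.5-Coder (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`, `<|repo_name|>`, `<|file_sep|>`); verify them against your tokenizer's special-tokens map before training.

```python
# Sketch of Qwen2.5-Coder prompt formats (token names assumed from the
# model's published docs -- confirm against tokenizer.special_tokens_map).

def fim_prompt(prefix: str, suffix: str) -> str:
    """File-level fill-in-the-middle: the model generates the span
    that belongs between prefix and suffix, after <|fim_middle|>."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

def repo_prompt(repo_name: str, files: dict[str, str]) -> str:
    """Repo-level context: files concatenated with <|file_sep|>
    separators under a <|repo_name|> header."""
    parts = [f"<|repo_name|>{repo_name}"]
    for path, content in files.items():
        parts.append(f"<|file_sep|>{path}\n{content}")
    return "\n".join(parts)

# Example: ask the model to complete the body of add().
p = fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
print(p.startswith("<|fim_prefix|>") and "<|fim_middle|>" in p)

r = repo_prompt("demo/calc", {"calc.py": "def add(a, b): ...",
                              "main.py": "from calc import add"})
print(r.count("<|file_sep|>"))
```

For fine-tuning, the same formats are used to build training samples: FIM triples for file-level infilling, and `<|file_sep|>`-joined files for long-context repo samples.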
