# Qwen 2.5 / Qwen3 1.5B/3B/7B — Multilingual Champion (Turkish Token Efficiency)

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-qwen2.5-qwen3-7b-multilingual-turkish
> Updated: 2026-05-14T14:42:51.420Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part III — Small Open Models (1B–8B)

**TLDR:** Qwen 2.5 / Qwen3 is Alibaba's open-weight family: ~151K-token vocabulary (Turkish-friendly tokenization), Apache 2.0 license, and generally easier to fine-tune than Llama. Qwen2.5-7B with QLoRA on an RTX 4090 takes roughly 40 min per epoch. TR-MMLU: 38.1 baseline → 44.2 after fine-tuning (+16% relative). For longer contexts, pair Qwen3 14B with YaRN.
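
To make the "Qwen2.5-7B QLoRA on an RTX 4090" claim concrete, here is a minimal sketch of such a run using `transformers`, `peft`, `bitsandbytes`, and `trl`. The hyperparameters, the dataset path `tr_instructions.jsonl`, and its `"text"` column are illustrative placeholders, not values taken from this cookbook; adapt them to your own data and trl version.

```python
# Minimal QLoRA sketch for Qwen2.5-7B-Instruct (illustrative settings;
# dataset path/column are placeholders, not from the cookbook).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

model_id = "Qwen/Qwen2.5-7B-Instruct"

# 4-bit NF4 quantization keeps the 7B base within a 24 GB RTX 4090.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# LoRA adapters on the attention and MLP projections (common Qwen targets).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Placeholder: a JSONL file with a "text" column of formatted TR instructions.
dataset = load_dataset("json", data_files="tr_instructions.jsonl", split="train")

training_args = SFTConfig(
    output_dir="qwen25-7b-tr-qlora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-4,
    bf16=True,
    logging_steps=10,
)

# Depending on your trl version you may need to pass the tokenizer explicitly;
# recent versions load the matching tokenizer from the model automatically.
trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,
)
trainer.train()
```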


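The "Qwen3 14B + YaRN" note refers to stretching the model's native 32K context with YaRN rope scaling. A short sketch of how that can be wired up at load time is below; the `factor` of 4.0 and the resulting ~128K target are illustrative, following the format documented for Qwen models, not values prescribed by this cookbook.

```python
# Sketch: enabling YaRN rope scaling on Qwen3-14B for longer contexts.
# factor=4.0 stretches the native 32K window toward ~128K (illustrative).
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen3-14B")
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-14B",
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```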