# Phi-4 + Phi-4-mini: Microsoft's Synthetic-Curriculum Model — Why Fragile in TR?

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-phi-4-phi-4-mini-synthetic-curriculum
> Updated: 2026-05-14T14:42:51.775Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part III — Small Open Models (1B–8B)
**TLDR:** Phi-4 (14B) and Phi-4-mini (3.8B) are Microsoft's models trained on "textbook quality" synthetic data. They are strong at math and code but weak in general Turkish (TR), because the synthetic training data is heavily English. Includes a Phi-4 QLoRA lab on an RTX 4090, plus a look at where the models shine: math reasoning and code completion.
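As a preview of the QLoRA lab, the setup below is a minimal sketch of 4-bit fine-tuning configuration for Phi-4 on a single 24 GB GPU such as an RTX 4090. It assumes the Hugging Face model id `microsoft/phi-4` and the standard transformers + peft + bitsandbytes stack; the `target_modules` names are assumptions and should be checked against the actual projection-layer names in the checkpoint.

```python
# Sketch of a QLoRA configuration for Phi-4 (14B) on a 24 GB GPU.
# Assumed: model id "microsoft/phi-4", transformers/peft/bitsandbytes installed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls run in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4",                      # downloads ~14B weights; needs network
    quantization_config=bnb_config,
    device_map="auto",
)

# Small trainable LoRA adapters on top of the frozen 4-bit base.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed layer names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapters are trainable
```

With this configuration only the LoRA adapter weights receive gradients, which is what keeps the memory footprint of a 14B model within a single consumer GPU.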

