# DPO Math: Bradley-Terry → Loss Function Derivation — Why No Reward Model?

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-dpo-mathematical-derivation
> Updated: 2026-05-14T14:42:57.905Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part XI — Alignment & Preference Optimization
**TLDR:** DPO (Rafailov et al., 2023) is mathematically equivalent to RLHF but trains in a SINGLE stage: Bradley-Terry preference model → KL-constrained RL objective → closed-form optimal policy → SFT-like loss. Covers the β hyperparameter's effect on the gradient, plus a hands-on TRL DPOTrainer lab on an RTX 4090.
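
To make the chain in the TLDR concrete, here is a minimal sketch of the final loss in plain PyTorch (function and argument names are illustrative, not TRL's API): substituting the implicit reward $r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\text{ref}}(y \mid x)}$ into the Bradley-Terry model reduces the objective to a logistic loss on the difference of log-ratios, which is why no separate reward model is ever trained.

```python
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_logps_chosen: torch.Tensor,    # log pi_theta(y_w | x), shape (batch,)
    policy_logps_rejected: torch.Tensor,  # log pi_theta(y_l | x), shape (batch,)
    ref_logps_chosen: torch.Tensor,       # log pi_ref(y_w | x), frozen reference
    ref_logps_rejected: torch.Tensor,     # log pi_ref(y_l | x), frozen reference
    beta: float = 0.1,                    # KL-penalty strength from the RL objective
) -> torch.Tensor:
    """DPO loss from Rafailov et al. (2023):
    -log sigmoid(beta * (chosen log-ratio - rejected log-ratio))."""
    # Implicit reward: r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x)).
    # The partition function Z(x) cancels in the chosen-minus-rejected
    # difference, so no explicit reward model is needed.
    chosen_ratio = policy_logps_chosen - ref_logps_chosen
    rejected_ratio = policy_logps_rejected - ref_logps_rejected
    logits = beta * (chosen_ratio - rejected_ratio)
    # Bradley-Terry preference probability -> binary logistic loss.
    return -F.logsigmoid(logits).mean()
```

Note how β enters the gradient: differentiating gives a per-example weight of $\beta \, \sigma(-\text{logits})$, so examples where the implicit reward still ranks the rejected completion higher get the largest updates, and a larger β both sharpens this weighting and tightens the KL pull toward the reference policy.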

