# Chinchilla Scaling Laws (2022): Hoffmann et al. — 1:1 Param:Data Revolution

> Source: https://sukruyusufkaya.com/en/learn/llm-muhendisligi/chinchilla-scaling-laws-hoffmann-2022
> Updated: 2026-05-13T13:00:28.967Z
> Category: LLM Engineering
> Module: Module 12: Scaling Laws — Growth Laws of LLMs

**TLDR:** Hoffmann et al.'s 2022 paper 'Training Compute-Optimal Large Language Models' corrected the Kaplan et al. scaling laws, which were biased by undertrained models. The Chinchilla recipe: scale parameters N and data D in equal proportion with compute (roughly 20 training tokens per parameter). The 70B-parameter Chinchilla outperformed the 280B Gopher on the same compute budget. Llama 3 and later models are Chinchilla-aware, and the post-Chinchilla trend is deliberate overtraining beyond the compute-optimal point.
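The compute-optimal split can be sketched numerically. A minimal sketch, assuming the standard approximation C ≈ 6·N·D FLOPs and the Chinchilla rule of thumb D ≈ 20·N tokens (the function name and exact token-per-parameter default are illustrative choices, not from the paper verbatim):

```python
import math

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Return a (params, tokens) split for a compute budget.

    Uses C = 6 * N * D with D = tokens_per_param * N, so
    N = sqrt(C / (6 * tokens_per_param)) and D = tokens_per_param * N.
    """
    n_params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Chinchilla itself: ~70B params on ~1.4T tokens implies C ≈ 6 * 70e9 * 1.4e12.
n, d = chinchilla_optimal(6 * 70e9 * 1.4e12)
print(f"params ≈ {n:.3g}, tokens ≈ {d:.3g}")  # params ≈ 7e+10, tokens ≈ 1.4e+12
```

Plugging Chinchilla's own compute budget back in recovers roughly 70B parameters and 1.4T tokens, which is a quick sanity check that the 1:1 proportional-scaling rule is self-consistent.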

