# The Anatomy of GPU Memory Budgeting: W + G + O + A + B — Managing the 24 GB of an RTX 4090 at the Atomic Level

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-gpu-memory-budgeting-w-g-o-a-b
> Updated: 2026-05-14T14:42:49.459Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part I — Hardware & Memory Engineering
**TLDR:** The most common phrase in fine-tuning: 'OOM'. This lesson puts an end to random OOMs. Break the VRAM budget into Weights + Gradients + Optimizer states + Activations + Buffers; understand mathematically why AdamW needs 8 bytes/param of optimizer state, Lion only 4, and NF4 squeezes weights down to 0.5 bytes/param. Fit Llama 3.1 8B into 24 GB with four different methods.
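To make the W + G + O + A + B arithmetic concrete, here is a minimal back-of-the-envelope sketch. It assumes Llama 3.1 8B has roughly 8.03B parameters and uses the bytes-per-parameter figures from the TLDR (bf16 weights/grads at 2, AdamW fp32 states at 8, Lion at 4, NF4 at 0.5); the activation and buffer terms are rough placeholder constants, not measured values, and the scenario list is illustrative rather than the lesson's exact four methods.

```python
# Back-of-the-envelope VRAM budget: Total = W + G + O + A + B.
# Assumptions: ~8.03B params (Llama 3.1 8B), pure-bf16 training
# (no fp32 master weights), fixed rough estimates for A and B.

N_PARAMS = 8.03e9   # approximate parameter count of Llama 3.1 8B
GIB = 1024 ** 3

def budget_gib(w_bpp, g_bpp, o_bpp, act_gib=3.0, buf_gib=1.5):
    """Total VRAM in GiB given bytes/param for W, G, O plus fixed A and B."""
    wgo = N_PARAMS * (w_bpp + g_bpp + o_bpp) / GIB
    return wgo + act_gib + buf_gib

# (weights, grads, optimizer state) in bytes per parameter.
# Adapter-only methods train so few params that their W/G/O are
# treated as ~0 here for simplicity.
scenarios = {
    "bf16 full FT + AdamW (fp32 m,v)": (2.0, 2.0, 8.0),
    "bf16 full FT + Lion (fp32 m)":    (2.0, 2.0, 4.0),
    "bf16 base + LoRA adapters":       (2.0, 0.0, 0.0),
    "NF4 base + LoRA (QLoRA-style)":   (0.5, 0.0, 0.0),
}

for name, (w, g, o) in scenarios.items():
    total = budget_gib(w, g, o)
    verdict = "fits" if total <= 24 else "OOM"
    print(f"{name:<34} {total:6.1f} GiB -> {verdict} on 24 GB")
```

Running this shows why full fine-tuning with AdamW (12 bytes/param before activations, roughly 90 GiB of W + G + O alone) can never land on a 24 GB card, while freezing the base model or quantizing it to NF4 collapses the budget by an order of magnitude.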

