# Mixed Precision Architecture: bf16 vs fp16 vs fp8 — Why Pure bf16 on RTX 4090?

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-mixed-precision-bf16-fp16-fp8-rtx4090
> Updated: 2026-05-14T14:42:49.729Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part I — Hardware & Memory Engineering
**TLDR:** fp16's loss-scaling complexity, bf16's fp32 master-weights pattern, fp8 (Ada supports it in hardware, but it is only native on H100), the TF32 matmul-precision flag, and autocast nuances: the cookbook's clear choice for the RTX 4090 is pure bf16, backed by the cost of a NaN and the training-stability math. A minimal sketch of the two paths follows.
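
Since the TLDR packs several PyTorch knobs into one line, a minimal sketch may help. It assumes a plain PyTorch training step (the `nn.Linear` stand-in, batch shapes, and learning rate are illustrative only) and shows the pure-bf16 autocast path next to the fp16 + `GradScaler` path it replaces, along with the TF32 matmul-precision flag:

```python
# Minimal sketch: pure bf16 autocast on an Ada GPU (e.g. RTX 4090),
# contrasted with the fp16 path that needs a GradScaler.
# Model, shapes, and lr are illustrative placeholders, not the cookbook's.
import torch
import torch.nn as nn

torch.set_float32_matmul_precision("high")  # allow TF32 for fp32 matmuls

device = "cuda"
model = nn.Linear(4096, 4096).to(device)           # stand-in model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 4096, device=device)
target = torch.randn(8, 4096, device=device)

use_bf16 = torch.cuda.is_bf16_supported()  # True on Ampere/Ada/Hopper

if use_bf16:
    # Pure bf16: same 8-bit exponent range as fp32, so small gradients
    # do not underflow and no loss scaling is needed.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = nn.functional.mse_loss(model(x), target)
    loss.backward()          # grads land in fp32 for the fp32 params
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
else:
    # fp16 fallback: the narrow 5-bit exponent underflows small gradients,
    # so the loss is scaled up before backward and grads unscaled after.
    scaler = torch.cuda.amp.GradScaler()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)   # skips the step if grads contain inf/NaN
    scaler.update()          # adapts the scale factor for the next step
    optimizer.zero_grad(set_to_none=True)
```

On an RTX 4090 `torch.cuda.is_bf16_supported()` returns True, so the fp16 branch is there only for contrast. This is the stability argument behind "pure bf16": because bf16 shares fp32's exponent width, the scaler machinery and its NaN-skip bookkeeping simply drop away.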

