# FP8 Training: H100 Native, Premature on RTX 4090 — Transformer Engine Internals

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-fp8-training-h100-rtx4090-prematur
> Updated: 2026-05-14T14:42:57.372Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part X — Quantization Engineering

**TLDR:** FP8 is the future of AI compute. The H100 supports it natively (FP8 Tensor Cores + WGMMA instructions + Transformer Engine). The RTX 4090 (Ada) supports FP8 GEMM in hardware, but the software ecosystem is immature: fallbacks to higher precision are common and the training pipeline is buggy. Cookbook rule: train in bf16 on the 4090, run FP8 inference (vLLM). FP8 training on the H100 is covered in detail in Part XIII.
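To make the FP8 constraint concrete: the E4M3 format used for activations and weights has only 3 mantissa bits and a maximum finite value of 448, so every tensor must be rescaled into that range before casting; this is the core idea behind Transformer Engine's scaling recipes. Below is a minimal, hypothetical pure-Python sketch of that quantize/dequantize round trip (not Transformer Engine's actual API; the function names are illustrative, and subnormals and the reserved NaN encoding are ignored):

```python
import math

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def quantize_e4m3(values, amax):
    """Scale values so the observed absolute max maps near E4M3_MAX,
    then round to a simulated E4M3 value (3 mantissa bits)."""
    scale = E4M3_MAX / amax if amax > 0 else 1.0
    out = []
    for v in values:
        x = max(-E4M3_MAX, min(E4M3_MAX, v * scale))
        if x == 0.0:
            out.append(0.0)
            continue
        # E4M3 stores 3 mantissa bits: keep 4 significant binary digits
        # (1 implicit leading bit + 3 stored bits) of the significand.
        m, e = math.frexp(x)       # x = m * 2**e with 0.5 <= |m| < 1
        m = round(m * 16) / 16     # round significand to 4 binary digits
        out.append(math.ldexp(m, e))
    return out, scale

def dequantize(values, scale):
    """Undo the per-tensor scale to recover approximate originals."""
    return [v / scale for v in values]
```

With only 3 mantissa bits the relative rounding error is on the order of a few percent, which is why FP8 training pipelines keep master weights and gradient accumulation in higher precision and only cast for the GEMMs.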

