# Llama 3.3 70B QLoRA + FSDP: 8×H100 SXM Recipe (5.6h 1 Epoch)

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-llama-3.3-70b-qlora-fsdp-recipe
> Updated: 2026-05-14T14:42:52.752Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part IV — Mid-Large Models (13B-70B+) + Distributed Internals

**TLDR:** Full Lab recipe for Llama 3.3 70B-Instruct: 8×H100 SXM cloud (Lambda, $24/h), QLoRA NF4 + FSDP FULL_SHARD, bitsandbytes 4-bit quantization, gradient checkpointing, paged AdamW. One epoch over 50K Turkish Alpaca samples in 5.6h. TR-MMLU: 55.4 (base) → 60.8.
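As a rough sketch of the stack the TLDR names, the QLoRA and optimizer pieces map onto the standard `transformers` / `peft` / `bitsandbytes` config objects below. The LoRA rank, target modules, batch size, and output path are illustrative assumptions, not values from this recipe; FSDP FULL_SHARD itself is configured separately via `accelerate launch` rather than in this snippet.

```python
# Sketch of the QLoRA NF4 + paged AdamW setup from the TLDR.
# Assumed (not from the article): r=16, alpha=32, target modules, paths.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # bitsandbytes 4-bit base weights
    bnb_4bit_quant_type="nf4",              # NF4 quantization (the "QL" in QLoRA)
    bnb_4bit_compute_dtype=torch.bfloat16,  # H100 SXM runs bf16 natively
    bnb_4bit_use_double_quant=True,         # double quantization saves more VRAM
)

lora_config = LoraConfig(
    r=16,                                   # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

training_args = TrainingArguments(
    output_dir="llama33-70b-qlora-tr",      # hypothetical path
    num_train_epochs=1,                     # matches the recipe's single epoch
    optim="paged_adamw_32bit",              # paged AdamW (bitsandbytes)
    gradient_checkpointing=True,            # trade recompute for activation memory
    bf16=True,
)
```

The FULL_SHARD strategy would typically live in an `accelerate` config file (`fsdp_sharding_strategy: FULL_SHARD`) so that each of the 8 GPUs holds only a shard of the (quantized) parameters, gradients, and optimizer state.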

