# torch.compile + Inductor: Reduce-Overhead + Dynamic Shapes + Recompile Watcher

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-torch-compile-inductor
> Updated: 2026-05-14T14:42:59.737Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part XIII — Custom Kernels & Performance Surgery

**TLDR:** torch.compile, PyTorch 2.x's flagship feature: the Inductor backend (Triton kernel generation), the three compilation modes (default, reduce-overhead, max-autotune), dynamic shapes with a recompile watcher, CUDA graphs, and integration into a fine-tuning training pipeline. Result: roughly +15% fine-tuning throughput for Llama 3.1 8B on an RTX 4090.

