# GPTQ Algorithm: Optimal Brain Quantization + Hessian Update — Llama 8B in 12 Min on RTX 4090

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-gptq-algorithm-optimal-brain-quantization
> Updated: 2026-05-14T14:42:57.015Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part X — Quantization Engineering

**TLDR:** GPTQ (Frantar et al., 2022) is the de facto standard for LLM weight quantization. It applies Optimal Brain Quantization, a line of work descending from Optimal Brain Damage (LeCun et al., 1990), combining an efficient inverse-Hessian update, per-column error compensation, and group quantization. It can quantize Llama 3.1 8B in about 12 minutes on an RTX 4090 with a WikiText-2 perplexity delta under 2%.
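To make the TLDR concrete, here is a minimal NumPy sketch of the core GPTQ loop for a single weight row: each column is rounded to a uniform grid, and the rounding error is propagated to the not-yet-quantized columns via the upper-triangular Cholesky factor of the inverse Hessian, as in Algorithm 1 of the paper. This is an illustrative simplification (no grouping, no lazy batch updates, symmetric grid, toy calibration data); all variable names and the damping constant are this sketch's own choices, not the article's.

```python
import numpy as np

rng = np.random.default_rng(0)

def gptq_quantize_row(w, H, scale):
    """Quantize one weight row column-by-column with GPTQ-style
    error compensation. Simplified sketch: no group quantization,
    no blocked/lazy updates. w: (n,) weights; H: (n, n) damped
    Hessian proxy 2 * X^T X; scale: uniform grid step."""
    n = w.size
    # Upper-triangular Cholesky factor of the inverse Hessian; its rows
    # supply the error-compensation coefficients.
    Hinv_chol = np.linalg.cholesky(np.linalg.inv(H)).T
    w = w.astype(np.float64).copy()
    q = np.empty(n)
    for i in range(n):
        q[i] = np.round(w[i] / scale) * scale   # round to nearest grid point
        err = (w[i] - q[i]) / Hinv_chol[i, i]   # scaled quantization error
        w[i:] -= err * Hinv_chol[i, i:]         # compensate remaining columns
    return q

# Toy layer: 64 calibration samples, 16 input features (illustrative sizes).
X = rng.normal(size=(64, 16))
w = rng.normal(size=16)
H = 2.0 * X.T @ X + 0.01 * np.eye(16)           # Hessian with small damping
scale = 0.1

q_gptq = gptq_quantize_row(w, H, scale)
q_rtn = np.round(w / scale) * scale             # round-to-nearest baseline
err_gptq = np.linalg.norm(X @ (w - q_gptq))     # layer output error, GPTQ
err_rtn = np.linalg.norm(X @ (w - q_rtn))       # layer output error, RTN
```

Comparing `err_gptq` against `err_rtn` on the same calibration inputs is a quick sanity check that error compensation is doing its job; the rest of the article develops the full blocked algorithm and the group-quantization variant.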

