# EXL2 (ExLlamaV2): Variable Bitrate Quantization — Which Layer at Which Bit?

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-exl2-variable-bitrate-quantization
> Updated: 2026-05-14T14:42:57.284Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part X — Quantization Engineering

**TLDR:** EXL2 is ExLlamaV2's native quantization format. It assigns a different bit-width to each layer: layer sensitivity is measured with a calibration pass, and bits are then allocated within a target budget so the most sensitive layers get more precision. It delivers the fastest single-user LLM inference on an RTX 4090 (roughly 1.5-2x faster than vLLM at batch size 1).
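The budget-constrained allocation described above can be sketched as a greedy procedure: start every layer at the minimum bit-width, then repeatedly give an extra bit to whichever layer's calibration error drops the most for that bit. This is a simplified illustration, not ExLlamaV2's actual optimizer; the layer names and error numbers in the usage example are hypothetical.

```python
import heapq

def allocate_bits(errors, budget_bits):
    """Greedily assign bit-widths under a total bit budget.

    errors: {layer: {bits: calibration_error}} -- error measured per
            layer at each candidate bit-width (hypothetical values here).
    budget_bits: total bits allowed across all layers.
    Returns {layer: assigned_bits}.
    """
    # Start every layer at its smallest supported bit-width.
    assign = {layer: min(opts) for layer, opts in errors.items()}
    spent = sum(assign.values())

    # Max-heap (negated gains) of the error reduction from the next +1 bit.
    heap = []
    for layer in errors:
        nxt = assign[layer] + 1
        if nxt in errors[layer]:
            gain = errors[layer][assign[layer]] - errors[layer][nxt]
            heapq.heappush(heap, (-gain, layer))

    # Spend remaining budget one bit at a time on the best marginal gain.
    while heap and spent < budget_bits:
        _, layer = heapq.heappop(heap)
        assign[layer] += 1
        spent += 1
        nxt = assign[layer] + 1
        if nxt in errors[layer]:
            gain = errors[layer][assign[layer]] - errors[layer][nxt]
            heapq.heappush(heap, (-gain, layer))
    return assign

# Hypothetical calibration errors for three layers at 2-5 bits.
errors = {
    "attn": {2: 0.90, 3: 0.40, 4: 0.15, 5: 0.10},
    "mlp":  {2: 0.50, 3: 0.30, 4: 0.20, 5: 0.18},
    "head": {2: 1.50, 3: 0.60, 4: 0.25, 5: 0.12},
}
# Budget of 12 bits total = 4 bits per layer on average.
print(allocate_bits(errors, 12))  # → {'attn': 4, 'mlp': 3, 'head': 5}
```

Note how the sensitive `head` layer ends up at 5 bits while the more robust `mlp` layer is pushed down to 3, with the average still at the 4-bit budget.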

