# LoRA + QLoRA: Parameter-Efficient Fine-Tuning Revolution — From Hu 2021 to Dettmers 2023

> Source: https://sukruyusufkaya.com/en/learn/llm-muhendisligi/lora-qlora-parameter-efficient-finetuning
> Updated: 2026-05-13T13:00:29.477Z
> Category: LLM Engineering (LLM Mühendisliği)
> Module: Module 14: Fine-tuning — SFT, LoRA, QLoRA

**TLDR:** LoRA (Hu et al., 2021): low-rank decomposition fine-tuning — the base weights stay frozen and only a small adapter is trained, touching roughly 1% of the parameters while preserving 95%+ of full fine-tuning quality. QLoRA (Dettmers et al., 2023): a 4-bit quantized base plus LoRA adapters, making it possible to fine-tune a 70B model on a consumer GPU via NF4 quantization and a paged optimizer. Practical Turkish case study: a production Turkish Llama-3 70B fine-tune for about $5K.
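The core LoRA idea above can be sketched in a few lines: the frozen weight `W` is augmented with a low-rank update `B @ A` scaled by `alpha / r`. This is a minimal NumPy illustration, not the actual implementation from the article; the dimensions (`d_in`, `d_out`, `r`, `alpha`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 64, 64, 8, 16   # illustrative sizes, not from the source

W = rng.normal(size=(d_out, d_in))       # frozen base weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection (small init)
B = np.zeros((d_out, r))                 # trainable up-projection (zero init)

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x): frozen base path plus low-rank update
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)

# With B zero-initialized, the adapter contributes nothing before training,
# so the output matches the frozen base model exactly.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r*(d_in + d_out) vs. d_in*d_out for full fine-tuning.
print(f"trainable fraction: {(A.size + B.size) / W.size:.2%}")
```

Zero-initializing `B` is the standard LoRA trick: it guarantees the adapted model starts identical to the base model, so training only ever moves away from a known-good starting point.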

