# LoRA

> Source: https://sukruyusufkaya.com/en/glossary/lora
> Updated: 2026-05-13T20:01:10.216Z
> Type: glossary
> Category: uretken-yapay-zeka-ve-llm
**TLDR:** A popular parameter-efficient fine-tuning (PEFT) method that represents weight updates with low-rank matrices, making fine-tuning of large models far cheaper.

<p>LoRA (Low-Rank Adaptation) is one of the most widely used PEFT techniques for adapting large models efficiently. Instead of updating all base weights, it freezes them and learns a pair of low-rank matrices whose product is added to the frozen weights. This sharply reduces the number of trainable parameters and training cost while preserving strong task adaptation, and it has effectively become a standard tool in modern LLM customization.</p>
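The idea above can be sketched numerically. In this minimal NumPy example (shapes, rank, and scaling factor are illustrative assumptions, not values from the source), a frozen weight matrix `W` receives an additive low-rank update `B @ A`, so only `A` and `B` would be trained:

```python
import numpy as np

# Hypothetical layer shapes and LoRA rank (illustrative values).
d_out, d_in, r = 64, 128, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen base weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, rank r
B = np.zeros((d_out, r))                    # zero-initialized, trainable

alpha = 8.0                                 # LoRA scaling hyperparameter
W_adapted = W + (alpha / r) * (B @ A)       # effective weights after adaptation

# With B zero-initialized, the adapted layer starts identical to the base layer.
print(np.allclose(W_adapted, W))            # True at initialization

# Parameter savings: full-rank update vs. low-rank update.
full_params = d_out * d_in                  # 8192
lora_params = r * (d_out + d_in)            # 768
print(full_params, lora_params)
```

Zero-initializing `B` is what makes the adapted model match the base model exactly before any training step, and the parameter comparison shows why the method scales: the trainable count grows with `r * (d_out + d_in)` rather than `d_out * d_in`.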