Technical Glossary: Generative AI and LLM
LoRA
A popular parameter-efficient fine-tuning (PEFT) method that represents weight updates with low-rank matrices.
LoRA (Low-Rank Adaptation) is one of the most widely used PEFT techniques for adapting large models efficiently. Instead of updating a full pretrained weight matrix W, it freezes W and learns an additive low-rank update ΔW = BA, where the factors B and A share a small rank r that is much lower than the dimensions of W. Only A and B are trained, which cuts the number of trainable parameters and the memory cost of fine-tuning dramatically while preserving strong task adaptation. LoRA has effectively become a standard tool in modern LLM customization.
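The idea can be illustrated with a minimal NumPy sketch, assuming a single linear layer and the common LoRA convention of scaling the update by alpha / r. The dimensions, the scaling value, and the initialization scale here are illustrative, not taken from any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 4  # rank r is much smaller than d_in, d_out

# Frozen pretrained weight W: never updated during LoRA fine-tuning.
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank factors. B starts at zero so the initial update
# B @ A is zero and the adapted layer exactly matches the base layer.
A = rng.normal(scale=0.01, size=(rank, d_in))
B = np.zeros((d_out, rank))

def lora_forward(x, alpha=8.0):
    """Forward pass: base output plus the scaled low-rank update."""
    delta_W = B @ A                       # rank-r weight update
    return (W + (alpha / rank) * delta_W) @ x

x = rng.normal(size=(d_in,))

# Before any training, the LoRA layer reproduces the base model.
assert np.allclose(lora_forward(x), W @ x)

# Parameter savings: only A and B are trained instead of all of W.
print(A.size + B.size, "trainable vs", W.size, "full")  # 512 trainable vs 4096 full
```

During training, gradients flow only into A and B; at deployment time the product BA can be merged back into W, so the adapted layer adds no inference latency.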
