Technical Glossary: Generative AI and LLM
Prefix Tuning
A PEFT technique that steers the model’s internal attention behavior through small learnable prefix representations.
Prefix tuning learns continuous, context-like vectors that are prepended to the model's attention computation, guiding task behavior while the pretrained weights remain frozen. Because only the small prefix parameters are updated, it is an attractive option when parameter efficiency matters, and it can be seen as one of the approaches that bridges prompting and fine-tuning.
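The idea can be illustrated with a toy PyTorch sketch (a simplified, single-layer illustration, not the full per-layer key/value prefixes used in practice): learnable prefix key and value vectors are concatenated in front of the input's keys and values, the frozen attention weights stay untouched, and only the prefix parameters would receive gradients during training. The class name `PrefixSelfAttention` and all hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PrefixSelfAttention(nn.Module):
    """Toy self-attention layer with a learnable prefix (illustrative sketch).

    The base attention weights are frozen, standing in for a pretrained model;
    only the small prefix key/value tensors are trainable.
    """
    def __init__(self, d_model=16, num_heads=2, prefix_len=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        for p in self.attn.parameters():
            p.requires_grad = False  # freeze the "pretrained" weights
        # Learnable prefix: virtual key/value tokens prepended to the context.
        self.prefix_k = nn.Parameter(torch.randn(1, prefix_len, d_model) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(1, prefix_len, d_model) * 0.02)

    def forward(self, x):
        b = x.size(0)
        # Queries come from the input; keys/values see prefix + input.
        k = torch.cat([self.prefix_k.expand(b, -1, -1), x], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), x], dim=1)
        out, _ = self.attn(x, k, v)
        return out

layer = PrefixSelfAttention()
x = torch.randn(2, 5, 16)          # (batch, sequence, d_model)
y = layer(x)                       # same shape as the input
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
```

Only `prefix_k` and `prefix_v` appear in `trainable`, which is the essence of the technique: task adaptation is pushed into a tiny set of prefix parameters rather than the full model.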