# Gisting and Soft-Prompt Tuning: Compressing Prompts into Embedding Vectors

> Source: https://sukruyusufkaya.com/en/learn/token-ekonomisi/gisting-soft-prompt-tuning-embedding-sikistirma
> Updated: 2026-05-14T14:44:13.302Z
> Category: Token Economy & LLM Cost Optimization
> Module: Module 6: Prompt Compression

**TLDR:** While LLMLingua achieves 60-90% compression, gisting can shrink prompts to roughly 1/100 of their original size. The core idea: represent prompts as dense embedding vectors instead of token sequences. This lesson covers gisting, soft prompt tuning, and the practical limits of both techniques.
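To make the "prompts as dense vectors" idea concrete, here is a minimal sketch of soft prompt tuning in PyTorch. It is illustrative only: a tiny toy network stands in for a real frozen LLM, and all names (`SoftPromptWrapper`, dimensions, token counts) are assumptions, not an API from the lesson. The point it demonstrates is that only a handful of learnable embedding vectors are trainable, while the base model stays frozen:

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepend k trainable 'soft prompt' vectors to the input embeddings.

    Only the soft prompt is trained; the base model and its embedding
    table are frozen. A long textual instruction can thus be distilled
    into k dense vectors (e.g. 10 vectors standing in for a ~1000-token
    prompt gives on the order of 1/100 compression).
    """
    def __init__(self, base_model, embed_layer, num_soft_tokens=10, dim=64):
        super().__init__()
        self.base_model = base_model
        self.embed = embed_layer
        # The compressed "prompt": k learnable vectors, not discrete tokens.
        self.soft_prompt = nn.Parameter(torch.randn(num_soft_tokens, dim) * 0.02)
        for p in self.base_model.parameters():
            p.requires_grad = False  # freeze the base model
        for p in self.embed.parameters():
            p.requires_grad = False  # freeze the embedding table too

    def forward(self, input_ids):
        tok_emb = self.embed(input_ids)                  # (B, T, dim)
        batch = input_ids.size(0)
        soft = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)  # (B, k, dim)
        # Soft prompt vectors are concatenated in front of the real tokens.
        return self.base_model(torch.cat([soft, tok_emb], dim=1))

# Toy setup: a tiny feed-forward "model" standing in for a frozen LLM.
dim, vocab = 64, 100
embed = nn.Embedding(vocab, dim)
body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, vocab))
model = SoftPromptWrapper(body, embed, num_soft_tokens=10, dim=dim)

ids = torch.randint(0, vocab, (2, 5))  # batch of 2, sequence length 5
out = model(ids)
print(out.shape)  # sequence length grows by the 10 soft tokens: (2, 15, 100)

# Only the soft prompt would receive gradients during training.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['soft_prompt']
```

In a real setup the frozen body would be a pretrained transformer and the soft prompt would be optimized on task data; at inference the k vectors simply replace the long textual prompt.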

