
LLaVA-1.5 / 1.6 / OneVision: 2-Stage Training + Projector Pretrain + Instruction Tune

LLaVA's classic two-stage training recipe: (1) projector-only pretraining on 558K image-caption pairs, (2) end-to-end instruction tuning. Includes a freeze strategy ablation (vision frozen vs. unfrozen, LLM frozen vs. unfrozen) and LLaVA-1.6 Mistral 7B fine-tuning on an RTX 4090.

Şükrü Yusuf KAYA
32 min read
Advanced

1. LLaVA 2-Stage Training

Stage 1: Projector Pretrain
- Frozen: vision encoder, LLM
- Trainable: projector (MLP) only
- Data: 558K LAION-CC-SBU image-caption pairs
- Format: <image>{caption}
- Duration: ~12 hours on 8×A100 80GB
- Goal: align image embeddings with the LLM embedding space (a minimal freeze-setup sketch follows this list)

Stage 2: Visual Instruction Tune
- Frozen: vision encoder (usually)
- Trainable: LLM (full or LoRA) + projector
- Data: 150K-665K visual instruction pairs (LLaVA-Instruct + custom)
- Format: <image>\nUser: question\nAssistant: answer
- Duration: ~10 hours on 8×A100 80GB (Vicuna 13B)
- Goal: multimodal instruction following
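A minimal sketch of the Stage 1 freeze setup using Hugging Face transformers. The checkpoint name and the module attribute `multi_modal_projector` follow the `LlavaForConditionalGeneration` implementation at the time of writing; treat them as assumptions and verify against your library version.

```python
# Stage 1: projector-only pretrain.
# Freeze the vision encoder and the LLM; train only the MLP projector.
# Assumes a transformers version with LLaVA support; attribute names may vary.
import torch
from transformers import LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",   # assumed checkpoint
    torch_dtype=torch.bfloat16,
)

# Freeze everything first ...
for param in model.parameters():
    param.requires_grad = False

# ... then unfreeze only the multimodal projector.
for param in model.multi_modal_projector.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable params: {trainable / 1e6:.1f}M")  # only a few million (projector MLP)
```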

2. Freeze Strategy Ablation (LLaVA-1.6 Mistral 7B, RTX 4090 QLoRA)

| Config | Trainable params | MM-Bench accuracy | Wall-clock |
|---|---|---|---|
| Frozen vision + frozen LLM + train projector | ~7M | 38.2 | 6h |
| Frozen vision + LoRA LLM + train projector | 64M | 56.8 | 8h |
| Unfrozen vision + LoRA LLM + projector | 124M | 58.4 | 10h |
| Full FT (vision + LLM + projector) | 7.5B | 60.1 | needs cloud |
Decision: RTX 4090 baseline → frozen vision + LoRA LLM + projector (cost-effective with good quality); see the sketch below.
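Below is a sketch of that winning RTX 4090 configuration as a QLoRA setup: 4-bit quantized LLM with LoRA adapters on the language-model attention projections, frozen vision tower, and a fully trainable projector. The checkpoint name, target-module regex, and module names are assumptions against current transformers/peft behaviour, not an exact reproduction of the ablation runs.

```python
# RTX 4090 baseline: frozen vision + LoRA LLM + trainable projector (QLoRA).
# Checkpoint, target-module regex, and module names are assumptions;
# verify against your transformers / peft versions.
import torch
from transformers import LlavaNextForConditionalGeneration, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf",   # assumed LLaVA-1.6 Mistral 7B checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    # Restrict LoRA to the language model's attention projections so the
    # vision tower stays completely frozen.
    target_modules=r".*language_model.*\.(q_proj|k_proj|v_proj|o_proj)",
    # Keep the multimodal projector fully trainable alongside the adapters.
    modules_to_save=["multi_modal_projector"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should land in the tens of millions, as in the table
```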
✅ Deliverables
1. Fine-tune LLaVA-1.6 Mistral 7B on a mini visual instruction dataset.
2. Run the frozen vs. unfrozen vision ablation.
3. Next lesson: 6.3, Llama 3.2 Vision 11B/90B.
