# Qwen3 14B / 32B Base + YaRN: Long-Context FT (32K → 128K) Marginally Feasible on RTX 4090

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-qwen3-14b-32b-yarn-long-context
> Updated: 2026-05-14T14:42:51.508Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part III — Small Open Models (1B–8B)

**TLDR:** QLoRA fine-tuning of Qwen3 14B at 32K context on an RTX 4090 peaks at 21 GB VRAM, a marginal but workable fit. This entry covers the YaRN rope-scaling math (32K → 128K), building a long-context SFT dataset (NIAH + RULER styles), why 32B is not feasible on a single 4090, and a cloud 1×H100 80GB alternative.
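As a quick orientation before the full walkthrough, the core YaRN arithmetic is just the ratio of target to native context. A minimal sketch, assuming Qwen3's native 32,768-token window and the `rope_scaling` dict shape used by Hugging Face Transformers (field names here follow that convention and should be checked against your Transformers version):

```python
# Sketch: YaRN rope-scaling config to extend Qwen3 from its native
# 32K context to 128K. Factor = target_ctx / native_ctx.
native_ctx = 32_768   # Qwen3 native max positions
target_ctx = 131_072  # desired 128K window

factor = target_ctx / native_ctx  # 131072 / 32768 = 4.0

# rope_scaling dict in the Transformers-style format (assumed field names).
rope_scaling = {
    "rope_type": "yarn",
    "factor": factor,
    "original_max_position_embeddings": native_ctx,
}
print(rope_scaling)
```

The key point is that YaRN is configured relative to the model's *original* trained window, so `original_max_position_embeddings` must stay at 32,768 even though the runtime window becomes 128K.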

