# Sequence Packing & Variable-Length Attention: The Trick That Boosts Throughput by 40%

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-sequence-packing-varlen-attention
> Updated: 2026-05-14T14:42:50.892Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part II — Tokenizer & Data Engineering

**TLDR:** Padding tokens are wasted compute. Sequence packing concatenates multiple short examples into a single sequence, and variable-length attention (`flash_attn_varlen_func`) applies a block-diagonal mask so packed examples never attend to one another. This article walks through the internals of TRL's `SFTTrainer` with `packing=True`, the anatomy of the `cu_seqlens` tensor, and a throughput benchmark.
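
To make the `cu_seqlens` anatomy concrete up front, here is a minimal sketch, not taken from the article, that packs three short examples into one flat token stream, builds the cumulative-length tensor, and, where a GPU with the flash-attn package is available, passes it to `flash_attn_varlen_func`. The sequence lengths, head count, and head dimension are illustrative assumptions.

```python
# Minimal sketch: cu_seqlens anatomy for varlen attention.
# Assumed: example lengths [3, 5, 2], 8 heads, head_dim 64 (illustrative only).
import torch

# Three packed examples of lengths 3, 5, and 2 -> 10 tokens total, zero padding.
seq_lens = torch.tensor([3, 5, 2], dtype=torch.int32)

# cu_seqlens is the prefix sum of the lengths with a leading 0: [0, 3, 8, 10].
# Entries i and i+1 bracket example i inside the flat token stream.
cu_seqlens = torch.zeros(len(seq_lens) + 1, dtype=torch.int32)
cu_seqlens[1:] = torch.cumsum(seq_lens, dim=0)
print(cu_seqlens)  # tensor([ 0,  3,  8, 10], dtype=torch.int32)

total_tokens = int(cu_seqlens[-1])
max_seqlen = int(seq_lens.max())

if torch.cuda.is_available():
    from flash_attn import flash_attn_varlen_func  # requires the flash-attn package

    n_heads, head_dim = 8, 64
    # Varlen kernels take q/k/v flattened over the batch: (total_tokens, heads, head_dim).
    q = torch.randn(total_tokens, n_heads, head_dim, dtype=torch.float16, device="cuda")
    k, v = torch.randn_like(q), torch.randn_like(q)

    # cu_seqlens encodes the block-diagonal mask: each token attends only within
    # its own [cu_seqlens[i], cu_seqlens[i+1]) slice, causally.
    out = flash_attn_varlen_func(
        q, k, v,
        cu_seqlens_q=cu_seqlens.cuda(), cu_seqlens_k=cu_seqlens.cuda(),
        max_seqlen_q=max_seqlen, max_seqlen_k=max_seqlen,
        causal=True,
    )
    print(out.shape)  # (10, 8, 64)
```

In practice you rarely build these tensors by hand: in recent TRL versions, `SFTConfig(packing=True)` has the trainer produce packed streams like this automatically, which is exactly the machinery the article unpacks next.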

