# Pre-training Pipeline End-to-End: Corpus → Tokenize → Pack → Train — Llama-3 Production Recipe

> Source: https://sukruyusufkaya.com/en/learn/llm-muhendisligi/pretraining-pipeline-corpus-tokenize-pack-train
> Updated: 2026-05-13T13:00:28.616Z
> Category: LLM Engineering
> Module: Module 11: Pre-training Dynamics + Optimizer Math
**TLDR:** All stages of the pre-training pipeline: corpus collection (Common Crawl, Wikipedia, code), data cleaning (deduplication, language filtering, quality scoring), tokenization and batching, sequence-packing strategy, and document-boundary handling. Llama-3 production recipe: 15T tokens, 24K H100-days of compute, 70 days of training.
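The sequence-packing and document-boundary steps mentioned above can be sketched minimally: tokenized documents are concatenated into one token stream, separated by an end-of-document token, and the stream is sliced into fixed-length training sequences. This is an illustrative sketch only — the `tokenize` stand-in and the `EOS` id are placeholders, not the article's actual tooling; production pipelines use a trained tokenizer (e.g. a BPE model) and write binary shards.

```python
# Hypothetical end-of-document token id used to mark boundaries
# inside a packed sequence (the real id depends on the tokenizer).
EOS = 0

def tokenize(doc: str) -> list[int]:
    # Stand-in "tokenizer": hashes each whitespace word into a small
    # vocab, offset by 1 so no word collides with EOS. Purely for
    # illustration; a real pipeline uses a trained subword tokenizer.
    return [1 + hash(w) % 1000 for w in doc.split()]

def pack(docs: list[str], seq_len: int) -> list[list[int]]:
    """Concatenate tokenized docs separated by EOS, then slice the
    stream into fixed-length training sequences (greedy packing)."""
    stream: list[int] = []
    for doc in docs:
        stream.extend(tokenize(doc))
        stream.append(EOS)  # document boundary marker
    # Drop the trailing partial chunk rather than padding it.
    return [stream[i:i + seq_len]
            for i in range(0, len(stream) - seq_len + 1, seq_len)]

seqs = pack(["a b c", "d e f g h"], seq_len=4)
```

With this toy input, the two documents yield 4 + 6 = 10 tokens including EOS markers, so `seq_len=4` produces two full sequences and drops the last two tokens. Real recipes also record where each EOS falls so the attention mask can stop tokens from attending across document boundaries.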

