# 3D Parallelism: Tensor + Pipeline + Data Parallel — Training Llama-3 70B and 405B

> Source: https://sukruyusufkaya.com/en/learn/llm-muhendisligi/3d-parallelism-tensor-pipeline-data-llama-3-70b
> Updated: 2026-05-13T13:00:29.310Z
> Category: LLM Engineering (LLM Mühendisliği)
> Module: Module 13: Distributed Training — Multi-GPU/Multi-Node
**TLDR:** How frontier LLMs are trained with Megatron-LM's 3D parallelism. Tensor Parallelism (Shoeybi et al., 2019) splits individual weight matrices across GPUs; Pipeline Parallelism (Huang et al., 2018) splits the model across layers and optimizes away the pipeline bubble. Combined with Data Parallelism, these form a 3D grid of DP × TP × PP — e.g. Llama-3 70B with DP=192, TP=8, PP=16. Covers communication patterns, optimization, and a capstone implementation outline.
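To make the DP × TP × PP grid concrete, here is a minimal sketch (not Megatron-LM's actual API; the rank layout with TP innermost is an assumption for illustration) that computes the total world size for the Llama-3 70B configuration above and maps a global rank to its (dp, pp, tp) coordinates:

```python
# Hypothetical rank layout: TP varies fastest, then PP, then DP.
# This mirrors the common convention of keeping TP ranks on one node,
# but it is an illustrative assumption, not Megatron-LM's exact scheme.
DP, TP, PP = 192, 8, 16
WORLD_SIZE = DP * TP * PP  # total number of GPUs in the 3D grid

def rank_to_coords(rank: int) -> tuple[int, int, int]:
    """Map a global rank to (dp, pp, tp) coordinates in the 3D grid."""
    tp = rank % TP                # innermost: tensor-parallel index
    pp = (rank // TP) % PP        # middle: pipeline-stage index
    dp = rank // (TP * PP)        # outermost: data-parallel replica index
    return dp, pp, tp

print(WORLD_SIZE)            # → 24576
print(rank_to_coords(0))     # → (0, 0, 0)
print(rank_to_coords(8))     # → (0, 1, 0): next pipeline stage
print(rank_to_coords(128))   # → (1, 0, 0): next data-parallel replica
```

With this layout, the 8 TP ranks that need the fastest interconnect sit next to each other, each group of TP × PP = 128 ranks forms one full model replica, and DP = 192 such replicas process different data shards.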

