
Checkpointed Backpropagation

A training technique that reduces memory usage by discarding most intermediate activations during the forward pass and recomputing them as needed during the backward pass.

Checkpointed backpropagation (also known as gradient checkpointing or activation checkpointing) is used to overcome memory limits when training very deep or very large models. Instead of storing the activations of every layer, only the activations at selected checkpoints are kept; the missing intermediate values are recomputed from the nearest checkpoint during backpropagation. This increases compute time, since parts of the forward pass run twice, but it can significantly reduce peak memory usage. It is one of the classic compute-memory trade-offs in large-scale model training.
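
As a rough illustration, the sketch below uses PyTorch's torch.utils.checkpoint to checkpoint segments of a small feed-forward network; the layer width, depth, and segment grouping are arbitrary choices for the example, not part of any standard recipe.

    # Minimal sketch of checkpointed backpropagation with PyTorch.
    # Only each segment's input is stored; activations inside a segment
    # are recomputed during the backward pass.
    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint

    class CheckpointedMLP(nn.Module):
        def __init__(self, width=1024, depth=8):
            super().__init__()
            # Group layers into segments; the input to each segment acts
            # as the checkpoint that is kept in memory.
            self.segments = nn.ModuleList([
                nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                              nn.Linear(width, width), nn.ReLU())
                for _ in range(depth // 2)
            ])

        def forward(self, x):
            for segment in self.segments:
                # Run the segment without saving its internal activations;
                # they are recomputed from the checkpointed input on backward.
                x = checkpoint(segment, x, use_reentrant=False)
            return x

    model = CheckpointedMLP()
    x = torch.randn(32, 1024, requires_grad=True)
    loss = model(x).sum()
    loss.backward()  # recomputes each segment's forward pass before its gradients

In this sketch, memory grows with the number of checkpoints rather than the total number of layers, at the cost of roughly one extra forward pass through each segment during backpropagation.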