# Early Stopping

> Source: https://sukruyusufkaya.com/en/glossary/early-stopping
> Updated: 2026-05-13T21:45:00.791Z
> Type: glossary
> Category: deep-learning

**TLDR:** A strategy that prevents overfitting by stopping training when validation performance begins to deteriorate.

<p>Early stopping is one of the most practical and effective forms of regularization. Training is terminated when validation performance begins to worsen, even though training loss may still be decreasing. Stopping at this point prevents unnecessary memorization of the training data and preserves the model at its best generalization point. It is particularly valuable in deep networks, where it controls both overfitting and training time.</p>
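The logic above can be sketched as a small, framework-agnostic helper. This is an illustrative implementation, not a specific library's API; the `patience` and `min_delta` parameters are assumptions modeled on conventions common in deep learning frameworks (wait a fixed number of epochs for the validation loss to improve by at least a minimum margin before stopping).

```python
class EarlyStopping:
    """Stop training when the monitored validation loss stops improving."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait after the last improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.best_epoch = -1          # epoch of the best model so far
        self.counter = 0
        self.should_stop = False

    def step(self, epoch, val_loss):
        """Record this epoch's validation loss; return True if training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss  # new best: reset the patience counter
            self.best_epoch = epoch    # (a real loop would checkpoint weights here)
            self.counter = 0
        else:
            self.counter += 1          # no improvement this epoch
            if self.counter >= self.patience:
                self.should_stop = True
        return self.should_stop


# Hypothetical validation losses: improving at first, then rising as the
# model starts to overfit.
losses = [0.90, 0.70, 0.55, 0.50, 0.52, 0.56, 0.61, 0.70]

stopper = EarlyStopping(patience=3)
for epoch, loss in enumerate(losses):
    if stopper.step(epoch, loss):
        break

print(stopper.best_epoch, stopper.best_loss)  # → 3 0.5
```

In a real training loop, the model's weights would be checkpointed whenever a new best validation loss is seen, and restored after stopping, so that the preserved model is the one at the best generalization point rather than the last epoch trained.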