Overfitting
A situation where a model learns the training data too closely and performs poorly on new data.
Overfitting occurs when a model learns not only the meaningful patterns in the training data but also its noise, accidental details, and dataset-specific quirks. Such a model can look extremely successful on the training data while underperforming on new inputs: it appears to have learned, but has actually memorized. Overfitting becomes more likely when data is limited, model capacity is too high, or evaluation is poorly designed. To reduce it, practitioners use regularization, proper train/validation/test splitting, data augmentation, and cross-validation. In real-world AI, reliable models are built not by chasing high training scores alone, but by carefully controlling generalization.
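The gap between "learned" and "memorized" can be made concrete with a minimal sketch. The toy data, the memorizing 1-nearest-neighbour model, and the threshold rule below are all hypothetical choices for illustration: the memorizer reproduces the training set perfectly, including a deliberately flipped (noisy) label, while the simpler rule gives up a little training accuracy and generalizes better.

```python
# Hypothetical sketch of overfitting: a model that memorizes its training set
# (a 1-nearest-neighbour lookup) versus a simple threshold rule.

# Toy training set: the true label is 1 when x > 0.5, except that x = 0.25
# carries a noisy (flipped) label a memorizing model will faithfully reproduce.
train = [(0.05, 0), (0.15, 0), (0.25, 1), (0.35, 0), (0.45, 0),
         (0.55, 1), (0.65, 1), (0.75, 1), (0.85, 1), (0.95, 1)]

# Clean test set drawn from the same underlying rule (1 when x > 0.5).
test = [(0.08, 0), (0.22, 0), (0.28, 0), (0.42, 0),
        (0.58, 1), (0.72, 1), (0.88, 1)]

memory = dict(train)

def memorizer(x):
    """Overfit model: return the stored label of the nearest memorized point."""
    nearest = min(memory, key=lambda xt: abs(xt - x))
    return memory[nearest]

def fit_threshold(data):
    """Simple model: choose the threshold t that maximizes training accuracy
    of the rule 'predict 1 when x > t'."""
    return max((x for x, _ in data),
               key=lambda t: sum((x > t) == (y == 1) for x, y in data))

t = fit_threshold(train)

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

print("memorizer train/test:",
      accuracy(memorizer, train), accuracy(memorizer, test))
print("threshold train/test:",
      accuracy(lambda x: int(x > t), train),
      accuracy(lambda x: int(x > t), test))
```

The memorizer scores 100% on the training data because it stores every point, noise included, but that noisy label costs it accuracy on unseen inputs near x = 0.25; the threshold rule cannot fit the flipped label, which is exactly why it generalizes.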
