ReLU Activation
The most common modern activation function, which zeros negative inputs and passes positive inputs through unchanged: ReLU(x) = max(0, x).
ReLU is widely regarded as one of the most effective practical discoveries in modern deep learning. It is cheap to compute, its gradient is simply 0 or 1, and it mitigates the vanishing-gradient problem that saturating activations such as sigmoid and tanh cause in deep networks. It has become the standard choice, especially in CNNs and fully connected architectures. However, the so-called "dying ReLU" problem, where some neurons get stuck in the negative region, receive zero gradient, and stop learning, must be monitored carefully.
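The behavior described above can be sketched in a few lines of plain Python (function names here are illustrative, not from any particular library):

```python
def relu(x):
    # Zero out negative inputs; pass positive inputs through unchanged.
    return max(0.0, x)

def relu_grad(x):
    # Piecewise-constant gradient: 1 for positive inputs, 0 otherwise.
    # A neuron whose pre-activation stays <= 0 receives zero gradient
    # and stops updating -- the "dying ReLU" problem.
    return 1.0 if x > 0 else 0.0
```

The zero gradient on the negative side is exactly why a dead neuron cannot recover: every weight update through it is multiplied by 0. Variants such as Leaky ReLU replace the flat negative region with a small slope to keep a nonzero gradient.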