Technical Glossary: Deep Learning
Encoder-Decoder RNN
A classical sequential architecture that compresses an input sequence into a fixed-size context vector and generates an output sequence from it.
The encoder-decoder RNN was the foundational design for sequence-to-sequence (seq2seq) tasks before attention-based models became dominant. The encoder reads the input sequence and compresses it into a fixed-length context vector; the decoder then generates the output sequence conditioned on that summary alone. While this design marked a major breakthrough in sequence transformation, forcing all information through a single fixed-size vector creates a context bottleneck that degrades performance on long sequences. Attention mechanisms arose in large part as a response to this limitation.
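The two-stage design above can be sketched as follows. This is a minimal toy illustration with vanilla RNN cells and randomly initialized weights (all dimensions, weight names, and the greedy decoding loop are assumptions for the sketch; real systems use trained LSTM/GRU cells, learned embeddings, and a proper start token):

```python
import numpy as np

rng = np.random.default_rng(0)
H, I, O = 8, 4, 4  # hidden, input, and output sizes (toy values)

# Encoder parameters (assumed names, random init for illustration)
W_xh = rng.normal(scale=0.1, size=(H, I))
W_hh = rng.normal(scale=0.1, size=(H, H))
# Decoder parameters
U_yh = rng.normal(scale=0.1, size=(H, O))
U_hh = rng.normal(scale=0.1, size=(H, H))
V_hy = rng.normal(scale=0.1, size=(O, H))

def encode(xs):
    """Compress the whole input sequence into one fixed-size context vector."""
    h = np.zeros(H)
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h  # this single vector is the bottleneck the text describes

def decode(context, steps):
    """Generate an output sequence conditioned only on the context."""
    h = context
    y = np.zeros(O)  # placeholder start token
    out = []
    for _ in range(steps):
        h = np.tanh(U_yh @ y + U_hh @ h)
        y = V_hy @ h  # output logits; a softmax/argmax would pick a token
        out.append(y)
    return out

xs = [rng.normal(size=I) for _ in range(5)]  # a 5-step input sequence
ctx = encode(xs)
ys = decode(ctx, steps=3)
```

Note that no matter how long `xs` grows, `encode` still returns a single `H`-dimensional vector, which is exactly why long inputs suffer under this architecture.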
