diff --git a/_posts/2018-06-27-illustrated-transformer.md b/_posts/2018-06-27-illustrated-transformer.md
index 6820552d267b4..9621e2a44136a 100644
--- a/_posts/2018-06-27-illustrated-transformer.md
+++ b/_posts/2018-06-27-illustrated-transformer.md
@@ -82,7 +82,7 @@ As is the case in NLP applications in general, we begin by turning each input wo
 
 Each word is embedded into a vector of size 512. We'll represent those vectors with these simple boxes.
 
-The embedding only happens in the bottom-most encoder. The abstraction that is common to all the encoders is that they receive a list of vectors each of the size 512 -- In the bottom encoder that would be the word embeddings, but in other encoders, it would be the output of the encoder that's directly below. The size of this list is hyperparameter we can set -- basically it would be the length of the longest sentence in our training dataset.
+The embedding only happens in the bottom-most encoder. The abstraction that is common to all the encoders is that they receive a list of vectors each of the size 512 -- In the bottom encoder that would be the word embeddings, but in other encoders, it would be the output of the encoder that's directly below. The size of this list is a hyperparameter we can set -- basically it would be the length of the longest sentence in our training dataset.
 
 After embedding the words in our input sequence, each of them flows through each of the two layers of the encoder.
 
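For context on the paragraph this patch touches, here is a minimal sketch (not part of the patch or the post) of the abstraction it describes: only the bottom-most encoder embeds tokens, and every encoder in the stack receives and returns a list of 512-dimensional vectors. The names (`dummy_encoder_layer`, `d_model`, `seq_len`) and the random stand-in layer are illustrative assumptions.

```python
import numpy as np

d_model = 512      # size of each vector in the list
seq_len = 4        # hyperparameter: list length (e.g. longest sentence in the training set)
vocab_size = 10_000

rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(vocab_size, d_model))

def dummy_encoder_layer(x):
    # Stand-in for self-attention + feed-forward; preserves the (seq_len, d_model) shape.
    return x + rng.normal(scale=0.01, size=x.shape)

token_ids = rng.integers(0, vocab_size, size=seq_len)  # toy input sentence
x = embedding_table[token_ids]   # embedding happens only here, at the bottom: (seq_len, d_model)

for _ in range(6):               # stack of 6 encoders, as in the original paper
    x = dummy_encoder_layer(x)   # each encoder consumes the output of the one directly below

print(x.shape)                   # (4, 512) -- the same shape at every level of the stack
```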