
Transformer From Scratch

This is a PyTorch implementation of the Transformer model. I wrote it for my own understanding, but it is test-driven (see tests) and meant to be clean and efficient. The code is also documented and explained in my blog post here.

For this implementation, we mostly follow the original architecture from Attention is All You Need, although we adopt Pre-Layer Normalization, which has been shown (Lei Ba et al.) to lead to more stable training than the originally proposed Post-Layer Normalization.
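
To illustrate the difference, here is a minimal sketch of a pre-LN residual sublayer, not taken from this repo; the class name `PreLNSublayer` and its parameters are illustrative, and the actual implementation in the repo may differ:

```python
import torch
import torch.nn as nn

class PreLNSublayer(nn.Module):
    """Residual connection that normalizes *before* the wrapped sublayer
    (pre-LN), instead of after the residual addition (post-LN)."""

    def __init__(self, d_model: int, dropout: float = 0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor, sublayer) -> torch.Tensor:
        # Pre-LN: x + Dropout(Sublayer(LayerNorm(x)))
        # Post-LN (original paper): LayerNorm(x + Dropout(Sublayer(x)))
        return x + self.dropout(sublayer(self.norm(x)))
```

In an encoder layer, `sublayer` would be the self-attention or feed-forward block; placing LayerNorm inside the residual branch keeps the skip path identity-clean, which is the usual explanation for pre-LN's easier optimization.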

The main Transformer code is in this file.
