Explaining Transformers as Simple as Possible through a Small Language Model
And understanding Vector Transformations and Vectorizations
Introduction
I have read countless articles and watched many videos about Transformer networks over the past few years. Most of them were very good, yet I still struggled to understand the Transformer architecture, even though the main intuition behind it (context-sensitive embeddings) was easier to grasp. While giving a presentation, I tried a different and more effective approach. This article is based on that talk, in the hope that it works just as well here.
“What I cannot create, I do not understand.” ― Richard Feynman
I also remember that when I was learning about Convolutional Neural Networks, I did not fully understand them until I built one from scratch. So I have built a few notebooks, which you can run in Colab; highlights from them are presented here without cluttering the article, because I feel that without working through this complexity it is not possible to understand Transformers in depth.
If you are unclear about vectors in the ML context, please read this brief article before continuing.
“Everything should be made as simple as possible, but not simpler.” ― Albert Einstein
Before we talk about Transformers and jump into Keys, Queries, Values, Self-Attention, and Multi-Head Attention, the complexity everyone gets sucked into first, let’s take a closer look at…