
Explaining Transformers as Simple as Possible through a Small Language Model

And understanding Vector Transformations and Vectorizations

Alex Punnen
Towards AI
23 min read · Feb 14, 2025


Introduction

I have read countless articles and watched many videos about Transformer networks over the past few years. Most were very good, yet I still struggled to understand the Transformer architecture, even though the main intuition behind it (context-sensitive embeddings) was easier to grasp. While giving a presentation, I tried a different and more effective approach. This article is based on that talk, and I hope it works as well for you.

“What I cannot create, I do not understand.” ― Richard Feynman

I also remember that when I was learning about Convolutional Neural Networks, I did not fully understand them until I built one from scratch. So I have built a few notebooks that you can run in Colab, and highlights of them are presented here without too much clutter, because I feel that without this level of detail it is not possible to understand the architecture in depth.

Before you go further, please read this brief article if you are unclear about vectors in the ML context.
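As a quick refresher before that (a minimal sketch, not taken from the linked article; the words and vector values are made up for illustration), in the ML context a word is represented as a vector of numbers, and similarity between words can be measured with the dot product or cosine similarity:

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- the values are invented for illustration.
embeddings = {
    "king":  np.array([0.8, 0.1, 0.9, 0.2]),
    "queen": np.array([0.7, 0.2, 0.9, 0.8]),
    "apple": np.array([0.1, 0.9, 0.1, 0.3]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # relatively high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # relatively low
```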

“Everything should be made as simple as possible, but not simpler.” ― Albert Einstein

Before we talk about Transformers and jump into the complexity of Keys, Queries, Values, Self-attention, and Multi-head Attention, which is where everyone gets sucked in first, let’s take a closer look at…
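Since the rest of the article builds toward self-attention, here is a minimal sketch of the core computation for reference. It is not taken from the article’s notebooks: the matrices are random stand-ins for learned weights and real embeddings, and only the mechanics of scaled dot-product attention are shown:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# 3 tokens, embedding dimension 4 -- random stand-ins for real embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))

# Learned projection matrices (here just random) map X to Queries, Keys, Values.
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Each token's query is scored against every key, scaled by sqrt(d_k);
# the softmaxed scores then weight the values, producing a
# context-sensitive embedding for each token.
scores = Q @ K.T / np.sqrt(K.shape[-1])
attention = softmax(scores, axis=-1) @ V
print(attention.shape)  # (3, 4): one contextualized vector per token
```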


