Introduction

Since its introduction via the original transformer paper (Attention Is All You Need), self-attention has become a cornerstone of many state-of-the-art deep learning models, particularly in the field of Natural Language Processing (NLP). In this article, we will delve into how self-attention works and implement it step-by-step from scratch.

Self-Attention

The concept of “attention” in deep learning originated to enhance Recurrent Neural Networks (RNNs) for handling longer sequences or sentences. For example, consider translating a sentence from one language to another. Translating word-by-word isn’t effective, as context plays a crucial role in understanding the meaning.

To address this, attention mechanisms were introduced, giving access to all sequence elements at each time step. The key is to be selective and determine which words are most important in a specific context. In 2017, the transformer architecture introduced a standalone self-attention mechanism, eliminating the need for RNNs altogether.

Self-attention enhances the information content of an input embedding by including information about the input’s context. This mechanism enables the model to weigh the importance of different elements in an input sequence and dynamically adjust their influence on the output.

In this article, we focus on the original scaled dot-product attention mechanism (referred to as self-attention), which remains the most popular and widely used attention mechanism in practice.

The Self-Attention Mechanism: A Building Block of LLMs

The self-attention mechanism, essential to Large Language Models (LLMs), assigns attention weights to input elements based on their relevance, enabling the model to capture context and dependencies effectively. In a sentence like “The cat sat on the mat,” the mechanism gives higher weights to related words such as “cat” and “mat,” creating context vectors that highlight important relationships.

For each word, self-attention calculates query, key, and value vectors using learned weights. The attention score is derived from the dot product of query and key vectors, scaled and processed through softmax to obtain weights. Each word’s context vector is then a weighted sum of the value vectors based on these weights.
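In matrix notation, this is the scaled dot-product attention from the original transformer paper: Attention(Q, K, V) = softmax(QKᵀ / √d_k) V, where Q, K, and V are the stacked query, key, and value vectors and d_k is the dimension of the key vectors.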

Multi-Head Attention

Multi-head attention enhances the self-attention mechanism by using multiple attention heads to process data simultaneously, each capturing different traits and associations. This allows the model to understand the input material comprehensively, enabling accurate and efficient execution of complex linguistic tasks.

Instead of a single set of attention weights, multi-head attention employs multiple sets, each with its own trainable weight matrices. This enables the model to focus on various aspects of the input sequence concurrently, capturing a diverse and comprehensive collection of features and associations.

For instance, in the sentence “The quick brown fox jumps over the lazy dog,” different attention heads might focus on distinct relationships: one on the subject-verb relationship between “fox” and “jumps,” and another on the descriptive relationship among “quick,” “brown,” and “fox.” The model combines these outputs to produce a detailed representation of the input sequence.

Multi-head attention works by concatenating the outputs of each attention head and performing a final linear transformation, creating a cohesive representation from the diverse features captured by each head. This mechanism enhances the model’s ability to handle longer sequences and complex linguistic structures.

The accompanying image illustrates the multi-head attention mechanism, showing how several sets of attention heads, each with its own trainable weight matrix, function in parallel. This approach captures varied features and associations, producing a comprehensive representation and improving the model’s capacity to manage complex linguistic tasks.
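To make the concatenation-plus-projection idea concrete, here is a minimal PyTorch sketch (not code from the original post; the head count, dimensions, and random inputs are illustrative assumptions):

```python
import torch

torch.manual_seed(123)

d_in, d_out, num_heads = 16, 24, 4    # illustrative sizes, not taken from the article
x = torch.randn(6, d_in)              # six token embeddings standing in for a real sentence

# One set of trainable query/key/value projection matrices per head.
heads = [
    {
        "W_q": torch.nn.Parameter(torch.rand(d_in, d_out)),
        "W_k": torch.nn.Parameter(torch.rand(d_in, d_out)),
        "W_v": torch.nn.Parameter(torch.rand(d_in, d_out)),
    }
    for _ in range(num_heads)
]
W_o = torch.nn.Parameter(torch.rand(num_heads * d_out, d_out))  # final linear projection

head_outputs = []
for h in heads:
    Q, K, V = x @ h["W_q"], x @ h["W_k"], x @ h["W_v"]
    attn = torch.softmax(Q @ K.T / d_out**0.5, dim=-1)  # scaled dot-product attention per head
    head_outputs.append(attn @ V)

# Concatenate the heads and apply the final linear transformation.
multi_head_output = torch.cat(head_outputs, dim=-1) @ W_o
print(multi_head_output.shape)  # torch.Size([6, 24])
```

Each head sees the same input but learns its own projections, which is what lets different heads specialize in different relationships.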

Coding and Embedding an Input Sentence

Let’s consider an input sentence, “Life is short, eat dessert first,” and put it through the self-attention mechanism. First, we create an embedding of the sentence.

Step 1: Tokenize the Sentence
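The listing below is a minimal sketch of this step (variable names such as `dc` are illustrative, not the original code): strip punctuation, split the sentence into words, and build a small vocabulary that maps each unique word to an integer index.

```python
sentence = 'Life is short, eat dessert first'

# Strip the comma, split into words, and assign each unique word an index.
dc = {word: idx for idx, word in enumerate(sorted(sentence.replace(',', '').split()))}
print(dc)
# {'Life': 0, 'dessert': 1, 'eat': 2, 'first': 3, 'is': 4, 'short': 5}
```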


Step 2: Convert Tokens to Integers
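Continuing the sketch, the words can be mapped to integers via the vocabulary and wrapped in a tensor (assuming PyTorch):

```python
import torch

sentence_int = torch.tensor([dc[word] for word in sentence.replace(',', '').split()])
print(sentence_int)
# tensor([0, 4, 5, 2, 1, 3]) with the vocabulary built above
```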


Step 3: Create Embeddings

We use an embedding layer to encode the inputs into a real-vector embedding.
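Continuing the sketch, we can use `torch.nn.Embedding`; the 16-dimensional embedding size is an illustrative choice:

```python
import torch

torch.manual_seed(123)

vocab_size, embed_dim = 6, 16                      # six words, 16-dimensional embeddings
embed = torch.nn.Embedding(vocab_size, embed_dim)

embedded_sentence = embed(sentence_int).detach()   # one row per token
print(embedded_sentence.shape)                     # torch.Size([6, 16])
```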


Defining the Weight Matrices

Self-attention utilizes three weight matrices: Wq, Wk, and Wv, which are adjusted as model parameters during training. These matrices project the inputs into query, key, and value components of the sequence, respectively.

Step 1: Initialize Weight Matrices
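A sketch of this step, continuing from the embedding above; the projection sizes d_q = d_k = 24 and d_v = 28 are illustrative assumptions:

```python
import torch

torch.manual_seed(123)

d = embedded_sentence.shape[1]    # input embedding size (16 in this sketch)
d_q, d_k, d_v = 24, 24, 28        # d_q must equal d_k; d_v may differ

W_query = torch.nn.Parameter(torch.rand(d, d_q))
W_key   = torch.nn.Parameter(torch.rand(d, d_k))
W_value = torch.nn.Parameter(torch.rand(d, d_v))
```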

Step 2: Compute the Unnormalized Attention Weights

Let’s compute the query, key, and value vectors for the second input element, which acts as the query here:
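Continuing the sketch, project the second embedded token (index 1) with the three weight matrices:

```python
x_2 = embedded_sentence[1]        # second input element

query_2 = x_2 @ W_query           # shape: (24,)
key_2   = x_2 @ W_key             # shape: (24,)
value_2 = x_2 @ W_value           # shape: (28,)
```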


Generalize this to calculate the remaining key and value elements for all inputs:
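In the same sketch, all keys and values can be computed with a single matrix multiplication each:

```python
keys   = embedded_sentence @ W_key      # shape: (6, 24)
values = embedded_sentence @ W_value    # shape: (6, 28)
```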


Compute the unnormalized attention weights:
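For example, the unnormalized weight ω between the query and a single key is just their dot product (here the key of the fifth token, index 4, chosen arbitrarily for illustration):

```python
omega_24 = query_2.dot(keys[4])   # unnormalized attention weight for query 2 and key 4
print(omega_24)
```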


Compute the ω values for all input tokens:
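Continuing the sketch, a single matrix-vector product gives the ω values of query 2 against every key:

```python
omega_2 = query_2 @ keys.T        # shape: (6,), one unnormalized weight per input token
print(omega_2)
```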


Computing the Attention Scores

Apply the softmax function to the unnormalized attention weights ω to obtain the normalized attention weights. Before normalizing, scale ω by 1/√d_k, where d_k is the dimension of the key vectors.
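A sketch of the scaling and softmax normalization, using `torch.nn.functional.softmax`:

```python
import torch.nn.functional as F

d_k = keys.shape[1]                                        # 24 in this sketch
attention_weights_2 = F.softmax(omega_2 / d_k**0.5, dim=0)
print(attention_weights_2)                                 # six weights that sum to 1
```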


Compute the context vector, which is an attention-weighted version of our original query input, including all the other input elements as its context via the attention weights:
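In the sketch, this is a weighted sum of the value vectors:

```python
context_vector_2 = attention_weights_2 @ values   # shape: (28,), i.e. d_v
print(context_vector_2)
```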


Repeat the same steps for all six input elements to calculate their attention-weighted context vectors.
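One way to do this for all tokens at once is in matrix form; the sketch below is equivalent to repeating the per-token steps, continuing from the variables above:

```python
queries = embedded_sentence @ W_query                     # (6, 24)
omega = queries @ keys.T                                  # (6, 6) unnormalized weights
attention_weights = F.softmax(omega / d_k**0.5, dim=-1)   # row-wise softmax
context_vectors = attention_weights @ values              # (6, 28), one context vector per token
print(context_vectors.shape)                              # torch.Size([6, 28])
```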


Conclusion

Self-attention is a powerful mechanism enabling models to dynamically focus on different parts of an input sequence. By implementing self-attention from scratch, we gain a deeper understanding of its inner workings and appreciate its significance in modern deep learning architectures. In future posts, we’ll explore more advanced topics related to transformers and their applications.