- 13. NLP: Positional Encodings
Feb 16, 2026
•
13 min read
The transformer’s way of remembering who stood where, because attention without order is chaos with confidence.
- 12. NLP: Embeddings
Jan 05, 2026
•
35 min read
The Geometry of Meaning: Turning meaning into maths.
- 11. NLP: Tokenizers
Dec 07, 2025
•
22 min read
A detailed description of tokenizers in LLMs.
- 10. NLP: Mixture of Experts (MoE)
Sep 22, 2025
•
16 min read
A guide on MoEs: their history, their implementation in transformers, and the losses associated with them.
- 9. NLP: Optimizing Attention
Aug 05, 2025
•
34 min read
A guide on KV-caching and sliding-window attention, ending with MQA and GQA.
- 8. NLP: BERT
Jul 25, 2025
•
7 min read
A guide on BERT models and their architecture.
- 7. NLP: Transformer Implementation
Jul 18, 2025
•
34 min read
Implementation and training of a Transformer for the En-Hi machine translation task.
- 6. NLP: Transformers
Jun 22, 2025
•
45 min read
A complete guide to the transformer architecture, attention mechanisms, and positional encoding.
- 5. NLP: Attention
May 28, 2025
•
26 min read
A guide on Attention mechanism for seq2seq models.
- 4. NLP: Seq2Seq
May 19, 2025
•
8 min read
A guide on the Seq2Seq models that transformed machine translation.
- 3. NLP: Pytorch for NLP
May 10, 2025
•
16 min read
Some tips for working with PyTorch on NLP tasks.
- 2. NLP: LSTM & GRU
May 08, 2025
•
5 min read
A guide on LSTM and GRU models and how they overcome limitations of RNNs.
- 1. NLP: Recurrent Neural Networks [RNN]
May 04, 2025
•
13 min read
A guide on the first steps towards modern NLP.