My Notes

My handwritten notes, taken while completing the Deep Learning Specialization on Coursera.

  • Neural Networks and Deep Learning Notes.
  • Improving Deep Neural Networks Notes.
  • Structuring Machine Learning Projects Notes.
  • Convolutional Neural Networks Notes.
  • Sequence Models Notes.

NLP Papers

A subset of papers that I found useful in clarifying my understanding of various NLP topics.

* A Neural Probabilistic Language Model 
* [Word2Vec]/[Negative Sampling]/[GloVe] 
* Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation 
* Sequence to Sequence Learning with Neural Networks 
* Neural Machine Translation by Jointly Learning to Align and Translate (the paper that introduced attention)
* Effective Approaches to Attention-based Neural Machine Translation 
* Attention Is All You Need
* [ELMo]/[BERT]/[RoBERTa] 
* [GPT]/[GPT-2]/[GPT-3]
* T5 (an awesome paper)

A few blog posts/links that I found really useful for understanding various fundamental concepts of NLP.

* Andrej Karpathy's coding-based backpropagation post [Link]
* Andrej Karpathy's blog on RNNs [Link]
* Understanding LSTM Networks [Link]
* The Illustrated Word2vec [Link]
* Mechanics of Seq2seq Models With Attention [Link]
* The Illustrated Transformer [Link]
* The Annotated Transformer [Link]
* Visualizing Transformer Language Models [Link]
* The State of Transfer Learning in NLP [Link]
* The Illustrated BERT, ELMo, and co. [Link]
* A Visual Guide to Using BERT [Link]
* Various BERT Pre-Training Methods [Link]