Reinforcement Learning from Human Feedback


Until a few years ago, the most advanced language models available were GPT-2 and BERT. GPT-2 was the most advanced auto-regressive, decoder-based model suited for text generation, while T5 was state of the art for other tasks like translation and summarization. These models were a great starting point but were … Read more

Few Shot Learning and Zero Shot Learning

Figure: zero-shot learning vs. few-shot learning

One of the most used terms in NLP these days is Large Language Models (LLMs). Few-shot learning and zero-shot learning are transfer learning techniques used to make the most of the vast pre-trained knowledge of these LLMs. In this blog post, let us understand what these terms mean and see them in action. Zero shot … Read more
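The distinction comes down to what the prompt contains: a zero-shot prompt gives the model only the task instruction, while a few-shot prompt also includes a handful of solved examples for the model to imitate. A minimal sketch of prompt construction for a sentiment task (the task, labels, and wording here are illustrative assumptions, not taken from the post):

```python
def zero_shot_prompt(text):
    # Zero-shot: the instruction alone, no labelled examples.
    return (
        "Classify the sentiment of the review as Positive or Negative.\n"
        f"Review: {text}\nSentiment:"
    )

def few_shot_prompt(examples, text):
    # Few-shot: prepend a few labelled demonstrations before the query,
    # so the model can infer the task format from the examples.
    demos = "\n".join(f"Review: {t}\nSentiment: {label}" for t, label in examples)
    return (
        "Classify the sentiment of the review as Positive or Negative.\n"
        f"{demos}\nReview: {text}\nSentiment:"
    )

examples = [
    ("Loved every minute of it.", "Positive"),
    ("A dull, lifeless film.", "Negative"),
]
query = "The plot dragged on forever."
print(zero_shot_prompt(query))
print(few_shot_prompt(examples, query))
```

Both prompts would be sent to the same pre-trained LLM unchanged; no weights are updated, which is what distinguishes these techniques from fine-tuning.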

Top 3 Attention Mechanisms in Large Language Models (LLMs)

Transformers have changed the way Natural Language Processing (NLP) tasks are performed over the last few years. The self-attention mechanism, which dispenses with recurrence, is the key to this success. Self-attention is the foundational block of the Transformer architecture, and it builds on the attention mechanism introduced in the paper by Bahdanau et al. It can be … Read more
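The core computation behind self-attention can be sketched in a few lines of NumPy: each token's embedding is projected into a query, key, and value, and the output for a token is a weighted average of all values, with weights given by softmax(QKᵀ/√d_k). A minimal single-head sketch (dimensions and random weights are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the token embeddings into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))    # 4 tokens, embedding dimension 8
Wq = rng.normal(size=(8, 8))   # illustrative random projections
Wk = rng.normal(size=(8, 8))
Wv = rng.normal(size=(8, 8))

out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)               # one output vector per token
```

Note there is no recurrence here: every token attends to every other token in a single matrix multiplication, which is what makes the operation parallelizable across the sequence.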
