Part 4: Modern Architectures and Large Language Models
Explore the transformer architecture, pre-trained language models, fine-tuning techniques, and efficient inference.
Chapters in This Part
Chapter 13: Transformer Architecture
Self-attention, multi-head attention, positional encoding, and transformer blocks.
Intermediate to Advanced · 120 min
Chapter 14: Pre-trained Language Models
BERT, GPT, tokenization, and the Hugging Face ecosystem.
Intermediate to Advanced · 90 min
Chapter 15: Fine-tuning and Alignment
LoRA, RLHF, DPO, and parameter-efficient fine-tuning.
Advanced to Expert · 120 min
Chapter 16: Efficient Transformer Architectures
FlashAttention, mixture-of-experts (MoE), state space models, and inference optimization.
Advanced to Expert · 105 min
Chapter 17: Vision Transformers
ViT, Swin Transformer, and vision-only applications.
Intermediate to Advanced · 90 min