# 10 - Large Language Models
This module covers core LLM technologies, from the Transformer architecture to practical applications.
## Module Structure
```
10-large-language-models/
├── 01-llm-fundamentals/    # LLM Basics
├── 02-pretrained-models/   # Pretrained Models
├── 03-fine-tuning/         # Fine-tuning
├── 04-prompt-engineering/  # Prompt Engineering
├── 05-rag/                 # RAG
├── 06-agents/              # Agent Systems
└── 07-alignment/           # Alignment Training
```

## Core Content
### 01 - LLM Fundamentals
- Transformer architecture
- Tokenization (BPE, WordPiece)
- Pretraining objectives
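One BPE training step can be sketched in a few lines: count adjacent symbol pairs across a word-frequency table, then merge the most frequent pair everywhere. The toy vocabulary below is made up for illustration.

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across all tokenized words."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with the merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy vocabulary: word (split into characters) -> corpus frequency.
vocab = {("l", "o", "w"): 5, ("l", "o", "t"): 3, ("n", "e", "w"): 2}
pair = most_frequent_pair(vocab)   # ("l", "o") occurs 8 times
vocab = merge_pair(vocab, pair)    # "lo" becomes a single symbol
```

Real tokenizers repeat this merge loop thousands of times and store the learned merges in order, so the same greedy merging can be replayed at inference time.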
### 02 - Pretrained Models
- GPT series: Autoregressive generation
- BERT: Bidirectional encoding
- LLaMA: Efficient open-weight models
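The GPT-style autoregressive loop can be sketched with a toy next-token score table standing in for the model; the tokens and scores below are invented purely to show the one-token-at-a-time decoding pattern.

```python
# A stand-in "model": maps the last token to next-token scores.
def toy_logits(token):
    table = {"<s>": {"the": 2.0, "a": 1.0},
             "the": {"cat": 3.0, "dog": 1.0},
             "cat": {"</s>": 5.0},
             "dog": {"</s>": 5.0}}
    return table[token]

def greedy_decode(start="<s>", max_len=10):
    seq = [start]
    while seq[-1] != "</s>" and len(seq) < max_len:
        scores = toy_logits(seq[-1])
        seq.append(max(scores, key=scores.get))  # pick highest-scoring token
    return seq

print(greedy_decode())  # ['<s>', 'the', 'cat', '</s>']
```

A real GPT conditions on the whole prefix rather than just the last token, and usually samples from the softmax distribution instead of taking the argmax.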
### 03 - Fine-tuning
- Full fine-tuning
- LoRA: Low-rank adaptation
- QLoRA: Quantized LoRA
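The LoRA forward pass is small enough to show directly: the pretrained weight W stays frozen, and only the low-rank factors A and B are trained, giving y = W x + (alpha / r) · B A x. The matrices, dimensions, and alpha below are illustrative, not tuned values.

```python
def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    base = matvec(W, x)              # frozen pretrained path
    delta = matvec(B, matvec(A, x))  # trainable low-rank update
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 pretrained weight
A = [[0.1, 0.0], [0.0, 0.1]]   # r x d_in, trained
B = [[0.0, 0.0], [0.0, 0.0]]   # d_out x r, initialized to zero
x = [1.0, 2.0]
print(lora_forward(W, A, B, x))  # B starts at zero, so output == W x
```

Initializing B to zero means the adapted model starts out identical to the pretrained one; QLoRA applies the same update on top of a quantized frozen W.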
### 04 - Prompt Engineering
- Zero-shot / Few-shot
- Chain-of-Thought
- ReAct
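Few-shot prompting is just string assembly: demonstrations are concatenated before the query so the model can imitate the input/output pattern in context. The sentiment task and examples below are made up.

```python
# Hypothetical few-shot demonstrations for a sentiment task.
examples = [
    ("Review: great movie", "Sentiment: positive"),
    ("Review: waste of time", "Sentiment: negative"),
]

def build_prompt(query, shots):
    parts = [f"{inp}\n{out}" for inp, out in shots]
    parts.append(f"Review: {query}\nSentiment:")  # leave the answer open
    return "\n\n".join(parts)

prompt = build_prompt("loved every minute", examples)
```

Chain-of-Thought prompting extends the same idea by including worked reasoning steps in each demonstration; ReAct interleaves reasoning text with tool actions.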
### 05 - RAG
- Document chunking
- Vector embeddings
- Vector databases (FAISS, Chroma)
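The retrieval step can be sketched without a vector database: pre-embedded chunks are ranked by cosine similarity to the query embedding and the top k are kept. The chunk texts and 3-dimensional vectors below are invented; real systems use embedding models with hundreds of dimensions and an index such as FAISS or Chroma.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy corpus: chunk text -> made-up embedding.
chunks = {
    "LoRA freezes the base model.": [0.9, 0.1, 0.0],
    "FAISS indexes dense vectors.": [0.1, 0.9, 0.2],
    "RLHF uses a reward model.":    [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_vec),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.2, 0.8, 0.1]))  # the FAISS chunk ranks highest
```

The retrieved chunks are then pasted into the prompt as context before the user's question.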
### 06 - Agent Systems
- Tool calling
- Memory management
- Task planning
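An agent loop combines all three ideas: a planner decides on a tool call, the result is stored as an observation (memory), and the loop repeats until the planner emits a final answer. Here a scripted function stands in for the LLM planner, and the single `calc` tool is a toy (eval-based, unsafe outside a demo).

```python
# Toy tool registry; real agents would wrap search, code execution, etc.
TOOLS = {
    "calc": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def scripted_planner(task, observations):
    """Stand-in for the LLM: call a tool once, then answer."""
    if not observations:
        return ("call", "calc", task)
    return ("answer", f"result is {observations[-1]}")

def run_agent(task):
    observations = []                 # simple episodic memory
    while True:
        action = scripted_planner(task, observations)
        if action[0] == "answer":
            return action[1]
        _, tool, arg = action
        observations.append(TOOLS[tool](arg))

print(run_agent("2 + 3"))  # result is 5
```

In a real system the planner is an LLM prompted with the tool descriptions and the observation history, and task planning means it may chain many calls before answering.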
### 07 - Alignment
- RLHF: Reinforcement learning from human feedback
- DPO: Direct preference optimization
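The DPO loss on a single preference pair is compact enough to compute by hand: with chosen/rejected log-probs under the policy and the frozen reference model, the loss is -log σ(β·[(lpθ(y_w) − lpref(y_w)) − (lpθ(y_l) − lpref(y_l))]). The log-prob values below are made up for illustration.

```python
import math

def dpo_loss(lp_w, lp_l, ref_w, ref_l, beta=0.1):
    """DPO loss for one (chosen, rejected) pair of sequence log-probs."""
    margin = (lp_w - ref_w) - (lp_l - ref_l)
    sigmoid = 1.0 / (1.0 + math.exp(-beta * margin))
    return -math.log(sigmoid)

# The policy already favors the chosen answer more than the reference
# does, so the margin is positive and the loss drops below log 2.
loss = dpo_loss(lp_w=-4.0, lp_l=-9.0, ref_w=-5.0, ref_l=-8.0)
print(loss)
```

Unlike RLHF, no explicit reward model or RL loop is needed: the reference model plus the sigmoid margin plays the role of the reward, and training is ordinary gradient descent on this loss.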