Available Courses

Inside the Transformer

Demystify the Transformer architecture — the engine behind every modern LLM. Understand the forward pass, attention mechanisms, generation strategies, and the architectural innovations that enable models like Llama 3.

Tokens & Embeddings Deep Dive

A deep dive into tokenizer internals, cross-model comparison, embedding training, and the encoder/decoder split — for learners who want to go beyond the fundamentals.

Embeddings & Semantic Search

Understand embeddings, semantic similarity, and vector databases — the foundation for RAG and semantic search.

Retrieval-Augmented Generation

Build RAG systems: chunking strategies, retrieval pipelines, prompt integration, and evaluation.

Advanced RAG

Master 11 advanced RAG strategies — from re-ranking and semantic chunking to knowledge graphs, agentic retrieval, and fine-tuned embeddings.

AI Agents

From the agentic loop to multi-agent orchestration — understand what AI agents are, how they call tools, the design patterns that shape them, and how to coordinate multiple agents.

Claude Code Agent Teams

Master parallel AI collaboration with agent teams. Learn to coordinate multiple Claude Code instances for complex debugging, code reviews, and cross-layer development.

Voice AI Engineering

Build real-time voice agents: audio pipelines, turn-taking, latency engineering, and the modern orchestration stack.

Model Context Protocol Fundamentals

Learn the core MCP concepts, architecture, transports, and FastMCP development workflow.

AI Security & Guardrails

Protect AI systems from adversarial attacks, implement input/output guardrails, and scope permissions for tool-using agents.

Evals & Observability

Build evaluation pipelines, production monitoring, and feedback loops that keep AI systems reliable and improving.

Fine-Tuning & Alignment

Go beyond prompting — adapt generative models to your domain using SFT, QLoRA, RLHF, and DPO.