AI Learning
A structured path through LLM fundamentals, context, RAG, agents, tooling, inference, LLMOps, evals, safety, multimodal systems, and production AI architecture. Built for daily study, public reference, and actual implementation — not tutorial-hoarding cosplay.
Duration
100
daily study blocks
Topics
13
dedicated pages
Phases
5
learning arcs
Builds
39
practice ideas
Phase 1 · Week 1-2 · Foundations
Understand how LLMs behave, how prompts shape outputs, and how context becomes an engineering resource.
Phase 2 · Week 3-5
Move from chat demos into retrieval, tools, structured outputs, and knowledge systems.
Phase 3 · Week 6-7
Design workflows that plan, call tools, recover from errors, and run with human checkpoints.
Phase 4 · Week 8-11
Learn inference, observability, evaluations, safety, cost control, and production architecture.
Phase 5 · Week 12-13
Work with multimodal AI and ship user-facing AI features with resilient application architecture.
Daily path
Each topic page contains what to learn, a daily plan, practice builds, and resource links. The order intentionally moves from model behavior → retrieval → agents → production reliability.
01 · Days 1-7
Start with the mechanics behind modern language models through an engineering lens: tokens, model families, sampling, streaming, structured outputs, tool use, multimodal inputs, embeddings, and reasoning models.
02 · Days 8-13
Learn prompt design as software design: message roles, few-shot examples, persona boundaries, templating, output parsers, and prompt injection defense.
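As a first taste of prompts-as-software, a practice build could look like this minimal sketch: a message list assembled as data rather than one string blob. The role structure mirrors common chat APIs; the system text and few-shot pairs are illustrative, not from any particular product.

```python
# Minimal sketch: a prompt template built as structured data.
SYSTEM = "You are a support assistant. Answer only from the provided context."

# Few-shot examples demonstrating the desired answer style.
FEW_SHOT = [
    {"role": "user", "content": "Q: How do I reset my password?"},
    {"role": "assistant", "content": "Go to Settings > Security > Reset password."},
]

def build_messages(question: str, context: str) -> list[dict]:
    """Assemble the message list: system rules, few-shot examples, then the task."""
    return (
        [{"role": "system", "content": SYSTEM}]
        + FEW_SHOT
        + [{"role": "user", "content": f"Context:\n{context}\n\nQ: {question}"}]
    )

msgs = build_messages("What plans do you offer?", "Plans: Free, Pro ($10/mo).")
```

Keeping the template as a function makes roles, examples, and user content independently testable, which is the point of treating prompt design as software design.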
03 · Days 14-20
Context engineering is the discipline of managing system prompts, conversation history, memory, tool results, retrieval, and compression under real context-window limits.
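One core move in this topic is trimming history to fit a budget. A minimal sketch, assuming whitespace word counts as a crude stand-in for a real tokenizer: keep the system prompt, then keep the newest turns that still fit.

```python
# Minimal sketch of history trimming under a context budget.
# Token counts are approximated by whitespace words; a real system
# would use the model's actual tokenizer.

def approx_tokens(text: str) -> int:
    return len(text.split())

def trim_history(system: str, turns: list[str], budget: int) -> list[str]:
    """Keep the system prompt plus the most recent turns that fit the budget."""
    used = approx_tokens(system)
    kept: list[str] = []
    for turn in reversed(turns):      # walk newest-first
        cost = approx_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return [system] + list(reversed(kept))  # restore chronological order
```

Dropping oldest-first is the simplest policy; later days in this topic cover smarter ones like summarizing evicted turns instead of discarding them.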
04 · Days 21-30
Retrieval augmented generation connects models to private knowledge. Learn chunking, vector databases, hybrid search, reranking, metadata filters, GraphRAG, document parsing, and multimodal retrieval.
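Chunking is the first build in this topic. A minimal sketch of fixed-size sliding-window chunking by characters, with overlap so that sentences split at a boundary still appear whole in at least one chunk (sizes here are illustrative; production chunkers usually split on tokens or structure):

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks where consecutive chunks share `overlap` characters."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap                      # how far the window advances
    return [text[i:i + size] for i in range(0, len(text), step)]
```

Overlap trades index size for recall: bigger overlap stores more duplicate text but makes it less likely a relevant passage is cut in half.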
05 · Days 31-39
Move beyond single prompts into systems that plan, act, observe, delegate, retry, and preserve state across long-running tasks.
06 · Days 40-45
Tool use turns a model from a text generator into an operator. Learn schema design, parallel tool calls, MCP, sandboxed execution, API wrappers, and tool result formatting.
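Schema design is concrete enough to sketch. Below is a JSON-Schema-style tool definition plus strict validation of model-produced arguments; the shape follows common function-calling conventions, and the tool name and fields are illustrative, not any vendor's API.

```python
import json

# Hypothetical tool definition in the JSON-Schema style used by function calling.
GET_WEATHER = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Oslo'"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
        "additionalProperties": False,
    },
}

def validate_args(schema: dict, raw_json: str) -> dict:
    """Parse model-produced arguments and enforce required keys and enum values."""
    args = json.loads(raw_json)
    params = schema["parameters"]
    for key in params["required"]:
        if key not in args:
            raise ValueError(f"missing required argument: {key}")
    for key, value in args.items():
        spec = params["properties"].get(key)
        if spec is None:
            raise ValueError(f"unexpected argument: {key}")
        if "enum" in spec and value not in spec["enum"]:
            raise ValueError(f"invalid value for {key}: {value}")
    return args
```

Validating before execution matters because models do produce malformed or hallucinated arguments; rejecting early gives the model a clean error to retry against.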
07 · Days 46-54
Inference engineering is where model quality meets infrastructure reality: hosted APIs, self-hosting, vLLM, TGI, SGLang, Ollama, llama.cpp, quantization, caches, batching, and cost-performance trade-offs.
08 · Days 55-61
Production AI needs traces, token accounting, model and prompt versions, A/B tests, replay tools, drift detection, and feedback loops.
09 · Days 62-70
Evaluation engineering turns subjective AI behavior into testable product quality using golden datasets, LLM judges, RAG metrics, agent trajectory analysis, synthetic data, online evals, and red teaming.
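The simplest eval to build first is exact-match scoring over a golden dataset. A minimal sketch, with a hypothetical `model_fn` standing in for whatever system is under test:

```python
# Golden dataset: fixed (input, expected output) pairs; contents are illustrative.
GOLDEN = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("3*3", "9"),
]

def run_eval(model_fn, golden) -> float:
    """Return the fraction of golden cases where the model's answer matches exactly."""
    passed = sum(1 for q, expected in golden if model_fn(q).strip() == expected)
    return passed / len(golden)
```

Exact match is brittle for free-form text, which is why the later days here move on to LLM judges and task-specific RAG metrics, but it is the right baseline for structured outputs.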
10 · Days 71-77
AI product margins depend on routing, caching, batching, distillation, compression, token budgets, and model gateways.
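Caching is the easiest of these levers to prototype. A minimal sketch of an exact-match response cache keyed on (model, prompt), so repeated identical requests never pay for tokens twice; class and method names are illustrative.

```python
import hashlib

class ResponseCache:
    """In-memory exact-match cache for model responses."""

    def __init__(self):
        self._store: dict[str, str] = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model: str, prompt: str) -> str:
        # Hash model + prompt together so identical prompts to different models don't collide.
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get_or_call(self, model: str, prompt: str, call) -> str:
        k = self._key(model, prompt)
        if k in self._store:
            self.hits += 1
            return self._store[k]
        self.misses += 1
        result = call(prompt)          # only pay for tokens on a miss
        self._store[k] = result
        return result
```

Exact-match caching only helps when prompts repeat verbatim; semantic caching and provider-side prompt caching extend the idea, at the cost of correctness risk.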
11 · Days 78-85
AI security is not a checkbox. Learn prompt injection defense, jailbreak resistance, tool-call exfiltration risks, moderation, guardrail frameworks, PII redaction, abuse prevention, schema validation, and audit logging.
12 · Days 86-93
Multimodal systems combine text with vision, audio, documents, generated images, video, and realtime experiences. Learn the capabilities and engineering constraints before building shiny chaos.
13 · Days 94-100
AI app architecture connects model calls to product systems: streaming, background jobs, resumable agent runs, tenant-level key management, fallback chains, feature flags, and APIs for cancel/retry/partial results.
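Of these patterns, the fallback chain is small enough to sketch here: try providers in order and return the first success, keeping the per-provider errors for observability. Provider names and the broad `except` are illustrative; real code would catch provider-specific error types.

```python
def with_fallback(providers, prompt: str) -> str:
    """providers: list of (name, callable) pairs, tried in order until one succeeds."""
    errors = []
    for name, call in providers:
        try:
            return call(prompt)
        except Exception as exc:       # production code should narrow this
            errors.append((name, repr(exc)))
    raise RuntimeError(f"all providers failed: {errors}")
```

The same shape extends naturally to the other concerns in this topic: wrap it with a timeout per provider, a feature flag choosing the chain, and a trace span per attempt.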