Masterclass · Intermediate

A Deeper Understanding of LLMs

Stop guessing. Start steering the model.

Most developers use LLMs by trial and error. This masterclass unpacks what's actually happening inside the box — tokens, context windows, sampling, tool use, caching — so you can write prompts and build systems that work reliably.

Duration: 1 day · 2–3 hours
Level: Intermediate
Format: Slides + live demo on Zoom / Google Meet
What you'll walk away with

By the end of the session, you'll have:

A concrete mental model of how LLMs process your input

Techniques to debug why a prompt isn't working

Patterns for structured output, tool use, and caching

Syllabus

What we'll cover.

  1. Tokens, context windows, and why they matter
  2. Sampling: temperature, top-p, and when each matters
  3. Structured output: JSON mode, tool use, schema design
  4. Prompt caching and cost optimization
  5. Debugging: what to do when the model 'just won't listen'
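For a taste of the sampling topic: top-p (nucleus) sampling keeps only the smallest set of candidate tokens whose probabilities sum to at least p, then renormalizes before sampling. Here's a minimal, simplified sketch of that filtering step — an illustration for intuition, not any provider's actual implementation (the toy probabilities are made up).

```python
def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability reaches p,
    then renormalize so the kept probabilities sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        total += prob
        if total >= p:
            break
    norm = sum(prob for _, prob in kept)
    return {token: prob / norm for token, prob in kept}

# Toy next-token distribution (hypothetical numbers)
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
filtered = top_p_filter(probs, p=0.8)
# With p=0.8, only "the" and "a" survive (0.5 + 0.3 = 0.8);
# low-probability tail tokens like "zebra" are cut before sampling.
```

Lowering p shrinks the candidate set (more deterministic); raising it admits more of the tail (more diverse) — the same lever you pull via the `top_p` parameter in most LLM APIs.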
Prerequisites

What you need before you join.

  • You've used an LLM API at least once
  • You're comfortable reading code in any language
Tools we'll use

Bring these, or install during setup.

  • Anthropic / OpenAI API
  • A notebook environment
Reserve your spot

Join the waitlist for A Deeper Understanding of LLMs.

We'll email you when the next cohort opens. Early signups get launch pricing.