AI Roadmap · Level 01

AI Foundations


What is AI?

Artificial Intelligence is a broad term for systems that can perform tasks normally requiring human intelligence — recognising patterns, understanding language, making decisions, generating content.

For most of its history, AI was rule-based: programmers wrote explicit instructions for every situation. These systems were useful but brittle — anything outside the rules broke them. Modern AI is different: instead of writing rules, we train systems on enormous amounts of data and let them discover patterns themselves.

The key shift: we went from telling computers how to do things, to showing them millions of examples and letting them figure it out.

Quick check — for each scenario, decide: Is this AI?

Smart Thermostat

Turns on heating at 7am every weekday, exactly as programmed.

Not AI. This is rule-based automation — no learning, no pattern recognition. It does exactly what it was told, every time.
📧

Spam Filter

Improves over time based on what you mark as spam or not-spam.

Yes, AI! It learns from labelled examples (spam / not-spam) to recognise patterns — that's machine learning.
💬

AI Chatbot

Reads your question and generates a helpful, human-sounding reply.

Yes, AI! A Large Language Model trained on billions of text examples to understand and generate language.
🧮

Calculator

Computes 2 + 2 = 4. Always. Exactly as programmed.

Not AI. Pure deterministic logic — same input always gives the same output. No learning, no patterns.

What is an LLM?

A Large Language Model is AI trained on massive amounts of text. Claude, ChatGPT, Gemini — all LLMs. They can read, write, summarise, translate, explain, and reason in natural language.

❌ Rule-based system

IF email contains "free money"
  → mark as spam
IF sender is in contacts
  → mark as safe
ELSE → ??

Breaks as soon as spammers change tactics

✅ Machine learning

Show model 10 million emails
labelled spam / not-spam

Model learns patterns
automatically

Adapts to new tactics

Gets better as it sees more data
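The contrast above can be sketched in code. Below is a toy learned spam filter in Python: it counts words in labelled examples, then classifies a new email by which class its words appeared in more often. The emails and words here are invented for illustration — real filters train on millions of messages and use far better statistics:

```python
from collections import Counter

def train(examples):
    """Count how often each word appears in spam vs. not-spam (ham) emails."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Label an email by which class its words appear in more often."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

# Toy labelled data — a real system would see millions of examples.
examples = [
    ("free money click now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow?", "ham"),
]
model = train(examples)
print(predict(model, "free prize inside"))  # spam
```

Notice that nobody wrote a rule about the word "prize" — the model picked it up from the labelled examples, which is the whole point of the learned approach.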

How LLMs work

You don't need to understand the maths — but a solid mental model makes you a far better AI user. Here's the full picture, step by step.

The full pipeline, from training data to response:

📚 Training data: billions of text examples
🔄 Training: predict the next token, adjust the weights
🧠 Model weights: billions of parameters

Then, at inference:

💬 Your prompt: tokenised input
Inference: the model predicts token by token
📝 Response: streamed back to you

Tokens — the basic unit

LLMs don't process text word by word. They use tokens — chunks of text roughly ¾ the size of a word. The word unbelievable might be 3 tokens; cat is 1. Punctuation and spaces are often separate tokens too.

This matters because models have a context window — a maximum number of tokens they can hold in "memory" at once. Think of it as a desk: once it's full, things start falling off the edge.
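Because a token averages about ¾ of a word (roughly four characters of English), you can estimate how many tokens a prompt will use without a real tokeniser. This is only a rule of thumb — the model's actual tokeniser will split text differently:

```python
def estimate_tokens(text: str) -> int:
    """Rough rule of thumb: ~4 characters of English per token."""
    return max(1, round(len(text) / 4))

prompt = "Explain what a token is in one sentence."
print(estimate_tokens(prompt))  # 10
```

Handy for a quick sanity check on whether a long document will fit in a context window.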


Generation — one token at a time

When you send a message, the model doesn't write the whole reply at once. It predicts one token, appends it, predicts the next, and so on — that's why responses stream in. Each new token is drawn from the statistically most likely continuations of everything that came before it.

This is also why hallucinations happen: the model is always generating the most plausible-sounding continuation, not necessarily the true one.
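That loop can be sketched with a toy "model". Here a hand-written lookup table of continuation probabilities stands in for billions of learned parameters, and we always pick the likeliest token (greedy decoding):

```python
# Toy "model": for each context seen so far, the probability of each next token.
# These probabilities are invented for illustration.
NEXT_TOKEN_PROBS = {
    "A token":               {" is": 0.9, " was": 0.1},
    "A token is":            {" a": 0.8, " the": 0.2},
    "A token is a":          {" chunk": 0.7, " piece": 0.3},
    "A token is a chunk":    {" of": 0.9, ".": 0.1},
    "A token is a chunk of": {" text": 0.85, ".": 0.15},
}

def generate(context: str) -> str:
    """Repeatedly append the most likely next token (greedy decoding)."""
    while context in NEXT_TOKEN_PROBS:
        probs = NEXT_TOKEN_PROBS[context]
        best = max(probs, key=probs.get)  # pick the likeliest token
        context += best                   # append it and go again
    return context

print(generate("A token"))  # "A token is a chunk of text"
```

Nothing in the loop checks whether the output is *true* — only whether it is *likely*, which is exactly why hallucinations happen.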

Example prompt: "Explain what a token is in one sentence." The model's response appears token by token, each chunk appended as soon as it is predicted.

Temperature — precision vs creativity

Temperature is a setting that controls how random the model's outputs are. Low temperature = focused and predictable. High temperature = creative and varied.
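Under the hood, temperature rescales the model's probability distribution over next tokens before one is sampled. A minimal sketch with made-up probabilities — one common formulation raises each probability to the power 1/T and renormalises:

```python
def apply_temperature(probs, temperature):
    """Rescale a distribution: each p becomes p ** (1/T), then renormalise."""
    weights = {tok: p ** (1 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    return {tok: w / total for tok, w in weights.items()}

# Made-up next-token probabilities for "The speed of light is ..."
probs = {"299,792,458 m/s": 0.6, "extremely fast": 0.3, "like poetry": 0.1}
print(apply_temperature(probs, 0.2))  # sharpened: top token ≈ 97%
print(apply_temperature(probs, 2.0))  # flattened: much more variety
```

Low temperature sharpens the distribution so the top token almost always wins; high temperature flattens it so unlikely tokens get picked more often.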

For the prompt "Describe the speed of light.", a low temperature (≈0.2) gives a precise, factual answer every time, while a high temperature (≈0.9) gives more varied, creative phrasing on each run.

Key vocabulary

Know these terms and you'll be able to follow any AI conversation.

🔤

Token


The basic unit of text an LLM processes. Roughly ¾ of a word on average.

📏

Context window


The max amount of text (in tokens) the model can "see" at once — its working memory.

🌡️

Temperature


Controls randomness. Low (≈0) = focused. High (≈1) = creative and varied.

👻

Hallucination


When a model confidently produces information that is incorrect or made up.

🔢

Embedding


A numerical representation of text that captures semantic meaning. Powers search and RAG.

🎛️

Fine-tuning


Training a pre-existing model further on a specific dataset to specialise its behaviour.

📋

System prompt


Instructions given to the model before the conversation — sets its behaviour and persona.

⚙️

Parameters


The numerical values inside a model encoding everything it learned during training.
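One card above deserves a tiny demo: embeddings. Text becomes a vector of numbers, and texts with similar meanings get vectors pointing in similar directions, which you can measure with cosine similarity. The three-dimensional vectors below are made up for illustration — real embeddings have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional "embeddings" — real ones are far larger.
embeddings = {
    "cat":         [0.90, 0.10, 0.00],
    "kitten":      [0.85, 0.20, 0.05],
    "spreadsheet": [0.00, 0.10, 0.95],
}
print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))       # high
print(cosine_similarity(embeddings["cat"], embeddings["spreadsheet"]))  # low
```

This "similar meaning → nearby vectors" property is what powers semantic search and RAG.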

What AI can (and can't) do

Some of these verdicts might surprise you.

💻

Write code


✅ Strong

Excellent at generating, explaining, and debugging code across most languages.

📰

Know today's news


❌ Can't

Has a training cutoff. Doesn't know recent events without web-search tools.

🌍

Translate language


✅ Excellent

High quality across major languages, including nuance and tone.

🧮

Complex maths


⚠️ Unreliable

Can handle simple maths but makes errors on multi-step calculations. Always verify.

🧠

Remember last week


❌ Can't

No memory across sessions by default. Each conversation starts fresh.

📖

Summarise documents


✅ Strong

One of its best use cases — extracting key points from long text, provided it fits in the context window.

Guarantee accuracy


❌ Never

Models hallucinate. Confident ≠ correct. Always verify important facts.

💡

Brainstorm ideas


✅ Excellent

Generates diverse ideas quickly. Great for overcoming blank-page paralysis.

You're ready for Level 02

You now understand what AI is, how LLMs work, what tokens are, and what to realistically expect. The next level teaches you how to communicate with AI effectively — which is where most of the practical value comes from.

Up next — Level 02

Prompt Engineering →

Five techniques that dramatically improve AI output, regardless of the model.
