# It's Not About the AI — It's About What You Feed It

*The Models Are Good. You're the Bottleneck.*

## Why Context Management Is the Only AI Skill That Matters

Everyone's arguing about which model is best. GPT-5 vs Claude vs Gemini. Benchmarks, vibes, "feel." Meanwhile, the same model that can't solve a Scrabble board can also build a full-stack web app to solve every Scrabble board. The difference isn't the model. It's you.
## The Scrabble Story
My girlfriend texted a photo of her Scrabble board to Claude and said "what should I play?"
It gave her garbage. Wrong letters, illegal words, hallucinated squares. She showed me and said "see, AI is useless."
I opened Claude Code and said:
1. Here's a Scrabble board image. Here are the rules of Scrabble.
2. OCR this board — extract every tile position into a 15x15 grid.
3. Research the best Scrabble-solving algorithm. Build it in JavaScript.
4. Load the official tournament dictionary (178,000 words).
5. Find the highest-scoring legal move for these rack tiles.
6. Build a web app where anyone can upload their board and get the answer.
Twenty minutes later we had a working Scrabble solver. [Link to live app]
Same model. Same image. Completely different result. The only thing that changed was the context.
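For a flavor of what step 5 looks like in code, here is a deliberately toy sketch in JavaScript. It only checks which dictionary words can be spelled from the rack and scores them by tile face value; a real solver works over the full 178,000-word lexicon with board positions, cross-words, and premium squares. The rack and the five-word dictionary are illustrative, not from the actual game.

```javascript
// Standard Scrabble tile values.
const TILE_SCORES = {
  A: 1, B: 3, C: 3, D: 2, E: 1, F: 4, G: 2, H: 4, I: 1, J: 8, K: 5,
  L: 1, M: 3, N: 1, O: 1, P: 3, Q: 10, R: 1, S: 1, T: 1, U: 1,
  V: 4, W: 4, X: 8, Y: 4, Z: 10,
};

// Can `word` be spelled using only the tiles on `rack`?
function playableFromRack(word, rack) {
  const counts = {};
  for (const tile of rack) counts[tile] = (counts[tile] || 0) + 1;
  for (const letter of word) {
    if (!counts[letter]) return false;
    counts[letter] -= 1;
  }
  return true;
}

// Sum of tile values, ignoring board multipliers.
function scoreWord(word) {
  return [...word].reduce((sum, letter) => sum + TILE_SCORES[letter], 0);
}

// Highest-scoring playable word from the rack.
function bestMove(rack, dictionary) {
  let best = null;
  for (const word of dictionary) {
    if (!playableFromRack(word, rack)) continue;
    if (!best || scoreWord(word) > scoreWord(best)) best = word;
  }
  return best;
}

// QI (11) beats RETINA (6); QUIET needs a U the rack doesn't have.
const rack = ['Q', 'I', 'E', 'T', 'N', 'A', 'R'];
const words = ['QI', 'TIE', 'QUIET', 'RETINA', 'TRAIN'];
console.log(bestMove(rack, words)); // → QI
```

Even this toy version shows why "find the best move" has to be decomposed: the model needs the scoring rules, the legality check, and the dictionary as explicit inputs before the search step means anything.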
## What Is Context, Actually?
People call it "prompt engineering" but that's the wrong frame. A prompt is a single message. Context is the entire environment the model operates in:
**What you tell it about the world:**
- The rules of the game
- The constraints it's working within
- The format you expect back
**What you tell it about the task:**
- Not "solve this" but the decomposed steps
- Not the answer you want, but the process to get there
- The tools it has available
**What you give it to work with:**
- Files, images, data
- Previous conversation history
- Memory from past sessions
**What you DON'T give it:**
- Irrelevant context that dilutes focus
- Contradictory instructions
- Ambiguity where precision matters
The person who said "solve this Scrabble board" gave the model one piece of context: an image. The person who got it right gave it rules, a methodology, tools, and a step-by-step plan. Same model, 10x the context, 100x the result.
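The contrast is easy to see as data. Here is a sketch of the two requests as generic message arrays; the exact request shape varies by provider, and the rack letters and `[board.jpg]` placeholder are illustrative.

```javascript
// The bare request: one piece of context — an image and a question.
const bare = [
  { role: 'user', content: 'What should I play? [board.jpg]' },
];

// The rich request: world rules, a decomposed process, and an
// expected output format, alongside the same image.
const structured = [
  {
    role: 'system',
    content:
      'You are a Scrabble assistant. Standard tournament rules, ' +
      '15x15 board, official dictionary only.',
  },
  {
    role: 'user',
    content: [
      '1. OCR the attached board into a 15x15 grid.',
      '2. List every legal placement for rack tiles: Q I E T N A R.',
      '3. Score each placement, including premium squares.',
      '4. Return the top 3 moves as JSON: {word, position, score}.',
      '[board.jpg]',
    ].join('\n'),
  },
];

// Same question, several times the context.
console.log(bare.length, structured.length);
```

Nothing in the second payload is clever. It is just the rules, the steps, and the output format written down instead of assumed.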
## The Therapist Story
I've been using Claude to analyze my relationship. Not as therapy — as pattern recognition. I fed it our text messages, gave it context about our history, and asked it to identify patterns, blind spots, and dynamics I might be missing.
At my next therapy session, I handed my phone to my therapist. He read through the analysis, asked some pointed follow-up questions to Claude, and spent about ten minutes going back and forth.
Then he laughed, handed my phone back, and said: "Yeah, this is the take. It did my job."
I jokingly lay down on the couch and waited for him to finish.
Here's what's interesting: the model didn't replace my therapist. But it did the groundwork. It processed months of conversation, identified patterns, and surfaced insights. My therapist — with decades of clinical training — validated it in minutes instead of discovering it over months of sessions.
This is the pattern everywhere:
- AP (accounts payable) inbox: AI classifies and extracts invoice data, human approves exceptions
- Scrabble: AI finds all legal moves, human picks the strategy
- Therapy: AI identifies patterns, therapist applies clinical judgment
- Audit: AI flags anomalies, auditor investigates the material ones
The model does the volume work. The human does the judgment work. But only if the human knows how to set up the context.
## The Three Layers of Context
**Layer 1: Session Context** — what you give the model right now
- The problem statement, decomposed into steps
- Relevant files, data, images
- Rules, constraints, expected output format
- This is what most people think "prompting" is
**Layer 2: Conversation Context** — what builds up during a session
- The model's own outputs feeding back as input
- Iterative refinement ("that's close, but adjust X")
- Error correction and learning within the session
- This is why multi-turn conversations beat one-shot prompts
**Layer 3: Persistent Memory** — what survives between sessions
- What the model learned about you, your preferences, your domain
- Past decisions and their outcomes
- Organizational knowledge, SOPs, institutional context
- This is the frontier — and almost nobody is doing it well
Most people are stuck on Layer 1. They write a clever prompt and wonder why the model doesn't read their mind. Power users operate at Layer 2 — they iterate, refine, build up context through conversation.
The unlock is Layer 3. When an AI agent remembers that your vendor always sends invoices as PDF attachments with the PO number in the subject line, it doesn't need you to explain that every time. When it remembers that invoices over $10K need VP approval, it routes automatically. When it remembers that last quarter's audit flagged duplicate payments from this vendor, it watches more carefully.
Memory is what turns a stateless chatbot into a colleague.
## Why This Matters More Than Model Choice
People spend hours debating GPT-5 vs Claude. That's like debating Snap-on vs Craftsman when you don't know how to change a tire.
The model is a tool. A very powerful, very general tool. The skill isn't "pick the right model." The skill is:
1. Decompose the problem — break it into steps the model can execute
2. Provide the right context — give it what it needs, nothing it doesn't
3. Build the right workflow — tools, iteration, feedback loops
4. Persist what matters — memory, learnings, institutional knowledge
This is context management. It's not prompt engineering. It's not AI strategy. It's the boring, practical skill of knowing how to set up the environment so the model can do its best work.
And right now, it's the most valuable skill in tech.
## Try It Yourself
[Link to Scrabble Solver] — Upload your board, enter your tiles, get the best move. Built in 20 minutes with Claude Code. The model didn't get smarter. The context did.
Geoffrey Doempke is a finance leader building AI-powered tools for accounting automation. He writes about what actually works at doempke.com.