Prompt Kit: The AI Memory Wall

This kit turns the article's core framework — contextual stewardship — into three immediately actionable prompts. You'll audit where dangerous context gaps exist in your work, write domain-specific evaluations that encode your judgment as agent guardrails, and start documenting decisions in a way that actually makes agents safer. These work for any domain: engineering, legal, marketing, finance, or anything else where you're handing consequential work to AI.

How to use this kit

These three prompts are designed to work in sequence but each stands alone. Start with Prompt 1 if you haven't thought systematically about where context gaps could hurt you. Jump to Prompt 2 if you already know where the risks are and want to build guardrails now. Use Prompt 3 as an ongoing practice to close the context gap over time. All three work in any AI assistant — ChatGPT, Claude, Gemini — no technical background required. Each prompt will interview you about your specific situation before producing outputs, so you'll get results tailored to your actual work, not generic advice.


Prompt 1: Context Gap Audit

Job: Maps the critical institutional knowledge in your domain that lives only in people's heads — the stuff that could cause an "Alexey moment" if an agent doesn't know it.

When to use: Before deploying agents on consequential work, when onboarding new AI workflows, or as a quarterly review of existing agent-assisted processes.

What you'll get: A prioritized risk map showing exactly where context gaps between your agents and your organization are most dangerous, with specific recommendations for what to document or encode first.

What the AI will ask you: Your role and domain, what AI agents or tools you currently use (or plan to), what work those agents handle, and questions about the unwritten rules, relationship history, and institutional knowledge in your area.
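
To make "prioritized risk map" concrete, here is one hypothetical entry of the kind Prompt 1 might surface. Every detail below is invented for illustration, not a prescribed output format:

```
Risk area: vendor contract renewals
Where the knowledge lives: only in the procurement lead's head
Agent exposure: drafting renewal emails and term sheets
Failure mode: agent proposes standard terms to a partner with a negotiated exception
Priority: high; document the exception list before the next renewal cycle
```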


Prompt 2: Domain-Specific Eval Writer

Job: Helps you write concrete evaluations — the checks and guardrails that encode your judgment into something that runs before, during, or after an agent acts. Works for any domain, not just engineering.

When to use: When you've identified a context gap (from Prompt 1 or your own experience) and want to build a practical safeguard. Also useful when handing off an AI-assisted workflow to someone with less context than you.

What you'll get: A set of specific, actionable eval criteria written in plain language — the "things an AI must not get wrong in our specific situation" — plus guidance on when and how to apply them.

What the AI will ask you: The specific workflow or agent task you want to protect, what "right" looks like in your context, what's gone wrong before (or could), and the organizational constraints an agent wouldn't know about.
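
Prompt 2's output is plain language, but in engineering domains "something that runs" can be literal code. Below is a minimal sketch of one encoded eval, assuming a hypothetical database-migration workflow; every table name and rule is invented for illustration, not drawn from the article.

```python
# A minimal sketch of one encoded eval, assuming a hypothetical
# database-migration workflow. Every table name and rule here is
# invented; the point is the shape: institutional judgment turned
# into a check that runs before an agent's plan is accepted.

# Hypothetical institutional rule: finance reconciles against these monthly.
PROTECTED_TABLES = {"billing_accounts", "invoices"}

def eval_migration_plan(plan: str) -> list[str]:
    """Return a list of violations; an empty list means the plan passes this eval."""
    violations = []
    lowered = plan.lower()
    for table in PROTECTED_TABLES:
        # Flag destructive operations on tables an agent has no way to know are sensitive.
        if f"drop table {table}" in lowered or f"truncate {table}" in lowered:
            violations.append(
                f"Destructive operation on protected table '{table}'; requires human sign-off."
            )
    # Encode an unwritten team convention the agent would otherwise never see.
    if "backfill" in lowered and "dry run" not in lowered:
        violations.append("Backfill proposed without a dry run; team convention requires one.")
    return violations

if __name__ == "__main__":
    sample = "Step 1: TRUNCATE invoices. Step 2: backfill rows from the new schema."
    for violation in eval_migration_plan(sample):
        print("FAIL:", violation)
```

The value isn't the string matching, which is deliberately crude here; it's that a rule which used to live in one person's head now fails loudly instead of silently. In non-technical domains, the same judgment becomes a plain-language checklist an agent (or its reviewer) walks through before acting.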


Prompt 3: Decision Context Documenter

Job: Helps you document decisions in a way that captures the why — the constraints, tradeoffs, relationship dynamics, and organizational context — not just the what. This creates the raw material that makes agents safer and chips away at the memory wall over time.

When to use: After any significant decision, at the end of a project phase, during team transitions, or as a regular practice (weekly or biweekly) to capture the context that's accumulating in your head.

What you'll get: A structured decision record that captures the invisible institutional context an AI agent would need to avoid making a locally-correct-but-organizationally-wrong move in the future. Written so it's useful to both humans and AI systems.

What the AI will ask you: What decision you made, what alternatives you considered, what constraints and context shaped your choice, and what an outsider (or an agent) would get wrong if they only saw the outcome.
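
As a hypothetical illustration of what "structured" can mean here, one possible skeleton for such a record (the field names are invented; the prompt will adapt them to your situation):

```
Decision: <what you decided, in one sentence>
Date / owner: <when, and who made the call>
Alternatives considered: <what else was on the table, and why it lost>
Constraints and context: <budget, deadlines, politics, relationship history>
What an outsider would get wrong: <the locally-correct-but-organizationally-wrong move>
Revisit when: <the condition that would make this decision worth reopening>
```

The last two fields are the ones agents benefit from most: they capture exactly the context that never shows up in the outcome itself.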