Prompt Kit: Map Your AI Difficulty Axes and Build a Smarter Workflow
This kit operationalizes the three core actions from the article: decompose the types of difficulty in your work, pressure-test whether your current tools match those difficulty types, and sharpen your ability to evaluate AI output. The models have differentiated enough that understanding what kind of hard you're solving changes how you use AI — whether that means getting more from the tool you already have or knowing exactly when a different one earns its place.
How to use this kit
Short on time? Start with the 10-Minute Rapid Audit — it maps your work across difficulty axes, evaluates your current AI usage, and identifies the highest-leverage change you can make this week. Run it in any capable AI assistant (ChatGPT, Claude, Gemini).
Going deep? Work through the three core prompts in order. Prompt 1 (Problem Difficulty Decomposition) produces the foundation — you'll reference its output in Prompts 2 and 3. Prompt 2 (AI Workflow Optimizer) starts with how you're using AI now and identifies where to adjust — which might mean using your current tool differently, or might mean routing specific tasks elsewhere. Prompt 3 (AI Output Taste Builder) identifies where you need to develop sharper judgment. These work best in a thinking-capable model like ChatGPT, Claude, or Gemini, and each takes 15–25 minutes of conversation.
All prompts are copy-paste ready. The AI will ask you for context — just answer its questions and it will do the rest.
⚡ 10-Minute Rapid Audit
Job: Produces a quick snapshot of how your work breaks down across difficulty types, where your current AI usage matches or misses, and the single highest-leverage change to make this week — all in one 10-minute conversation.
When to use: You want the practical takeaways without a deep dive. Good for a first pass you can revisit later.
What you'll get: A one-page audit with four sections: your difficulty axis breakdown, a current-tool assessment, your top recommendations (which may be better prompting, not a new tool), and a career durability snapshot.
What the AI will ask you: Your role, industry, 5–7 tasks that fill your typical week, which AI tools you currently use and how, and what feels hardest about your job.
Prompt 1: Problem Difficulty Decomposition
Job: Breaks down your actual work into the six difficulty axes from the article, revealing what's genuinely hard about your job and along which dimensions — so you can see which parts AI helps with now, which parts it will help with soon, and which parts remain fundamentally human.
When to use: When you want to understand why your work feels hard, which AI tools address which parts, and where your value is most durable. Best done quarterly as models improve.
What you'll get: A comprehensive difficulty map of your role with time allocation estimates, automation timeline projections for each axis, and a clear picture of where your human leverage is highest.
What the AI will ask you: A detailed walkthrough of a recent challenging work week — specific tasks, what made them hard, and where you spent the most energy.
Prompt 2: AI Workflow Optimizer
Job: Evaluates your current AI usage against the actual difficulty profile of your work — identifying where you're underusing what you have, where a different approach would help more than a different tool, and where a genuine capability gap means you should look elsewhere.
When to use: After you've thought about the types of difficulty in your work (ideally after running Prompt 1), and you want to get more leverage from AI — starting with what you already have.
What you'll get: An honest assessment of your current AI workflow, specific adjustments to try with your existing tools, identification of genuine gaps where a different tool would help, and a one-week testing plan.
What the AI will ask you: Your role, your current AI tools and how you use them, what's working, what's frustrating, and your most common AI-assistable tasks.
Prompt 3: AI Output Taste Builder
Job: Helps you identify where in your domain you most need to develop the skill of evaluating AI-generated output — the "taste" that becomes your most valuable skill as models get better at producing plausible-looking work.
When to use: When you realize the bottleneck has shifted from "can AI do this task" to "can I tell whether what AI produced is actually good." Especially important for professionals whose domains involve high-stakes decisions based on AI-assisted analysis.
What you'll get: A personalized map of where your evaluation skills are strong vs. weak, a set of domain-specific "smell tests" to apply to AI output, and a practice protocol for building sharper judgment.
What the AI will ask you: Your domain, the types of AI output you currently rely on, and examples of times AI output was wrong or misleading in your work.