Prompt Kit: Opus 4.7 Migration Playbook

Three prompts that take you from "should I switch?" to "my stack is reliable." The first audits your current setup for breakage. The second quantifies what the tokenizer change and adaptive thinking actually cost you. The third designs a peer review loop so neither model's self-assessment biases can burn you. Use them in sequence — fix what's broken, understand what it costs, then build the reliability layer.

How to use this kit

Prompt 1 (Migration Pre-Flight) is a five-minute triage. Paste your system prompt, API parameters, and routing setup. The AI identifies every breaking change, flags prompts that relied on 4.6's implicit inference, and gives you a Monday-morning action list. Run this in any capable model — ChatGPT, Claude, or Gemini all work.

Prompt 2 (Cost Impact Estimator) turns your usage data into a real cost projection. Feed it your use case mix and approximate token volumes. It estimates the combined tokenizer tax and adaptive thinking burn, then tells you where the model's efficiency gains (fewer loops, better persistence) offset the higher per-token cost and where they don't. Best run in a model with reasoning or extended thinking enabled (ChatGPT, Claude, and Gemini all offer one) so the math is reliable.

Prompt 3 (Peer Review Workflow Builder) is the one that outlasts the article. Describe your agentic pipeline — what it does, what it produces, what the stakes are — and get back a complete peer review architecture with model assignments, scoring rubrics, failure signatures, and handoff structure. Run in whichever model you trust for systems design.

All three prompts gather context conversationally. Paste them in and start talking.


Prompt 1: Migration Pre-Flight Check

Job: Audit your current Claude/API setup and produce a specific list of what breaks, what to change, and what to test before switching to Opus 4.7.

When to use: Before you flip any production workflow or API integration to Opus 4.7. Monday morning, five minutes.

What you'll get: A categorized action list — hard breaks (will cause errors), soft breaks (will degrade output quality), routing changes, prompt rewrites needed, and a prioritized test plan.

What the AI will ask you: Your current system prompt (or a summary), API parameters you're passing, what effort levels you use, what models you route to and for what tasks, and whether you're on the API, Claude.ai chat, or Claude Code.
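To make the output concrete, here is a minimal sketch of one way to capture the pre-flight findings as a structured action list. The category names, fields, and priority order are illustrative assumptions, not part of the prompt itself.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    HARD_BREAK = "hard break"          # will cause errors after the switch
    SOFT_BREAK = "soft break"          # will degrade output quality
    ROUTING = "routing change"         # traffic should move to a different model
    PROMPT_REWRITE = "prompt rewrite"  # prompt relied on the old model's implicit inference


@dataclass
class ActionItem:
    severity: Severity
    component: str   # e.g. "system prompt", "API parameters", "router"
    finding: str     # what the audit flagged
    fix: str         # the concrete change to make
    test: str        # how to verify it before go-live


@dataclass
class PreFlightReport:
    items: list[ActionItem] = field(default_factory=list)

    def prioritized_test_plan(self) -> list[ActionItem]:
        # Hard breaks get tested first, then soft breaks, then routing and prompt work.
        order = [Severity.HARD_BREAK, Severity.SOFT_BREAK,
                 Severity.ROUTING, Severity.PROMPT_REWRITE]
        return sorted(self.items, key=lambda item: order.index(item.severity))
```

Whether you track it in code or a spreadsheet, the priority order is the useful part: hard breaks are what you test before anything ships.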


Prompt 2: Cost Impact Estimator

Job: Estimate the real cost delta of moving to Opus 4.7, accounting for the tokenizer tax, adaptive thinking burn, and efficiency gains — then flag where costs go up, where they go down, and where cap issues are structural vs. fixable.

When to use: Before migrating, or after migrating when your bill looks wrong. Also useful for $20/month subscribers trying to understand why they're hitting caps faster.

What you'll get: A use-case-by-use-case cost breakdown with estimated multipliers, net impact projections, and specific recommendations for where to optimize vs. where to route elsewhere.

What the AI will ask you: Your use case mix, approximate token volumes or usage patterns, current model and tier, whether you're on API or subscription, and what effort levels you typically use.
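If you want to sanity-check the projection yourself, the arithmetic is simple enough to sketch. The multipliers, token volumes, and prices below are placeholder assumptions, not measured values; the point is the shape of the calculation: volume times tokenizer multiplier, plus thinking overhead on output, minus whatever the efficiency gains claw back.

```python
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    monthly_input_tokens: float        # current volume on the old model
    monthly_output_tokens: float
    tokenizer_multiplier: float = 1.0  # placeholder: >1.0 if the new tokenizer emits more tokens
    thinking_overhead: float = 0.0     # placeholder: extra output tokens from adaptive thinking, as a fraction
    efficiency_gain: float = 0.0       # placeholder: fraction of volume saved by fewer retries and loops


def monthly_cost(uc: UseCase, price_in_per_mtok: float, price_out_per_mtok: float) -> float:
    """Rough monthly spend for one use case, given per-million-token prices."""
    retained = 1.0 - uc.efficiency_gain
    input_tokens = uc.monthly_input_tokens * uc.tokenizer_multiplier * retained
    output_tokens = (uc.monthly_output_tokens * uc.tokenizer_multiplier
                     * (1.0 + uc.thinking_overhead) * retained)
    return (input_tokens * price_in_per_mtok + output_tokens * price_out_per_mtok) / 1_000_000


# Made-up example: same workload with and without the migration effects, prices held fixed,
# to isolate the tokenizer tax, thinking burn, and efficiency offset.
baseline = UseCase("report summaries", 40_000_000, 8_000_000)
migrated = UseCase("report summaries", 40_000_000, 8_000_000,
                   tokenizer_multiplier=1.15, thinking_overhead=0.30, efficiency_gain=0.10)
old = monthly_cost(baseline, 15.0, 75.0)
new = monthly_cost(migrated, 15.0, 75.0)
print(f"old ~ ${old:,.0f}/mo, new ~ ${new:,.0f}/mo, delta {100 * (new - old) / old:+.0f}%")
```

Where the delta comes out positive even after the efficiency gain, that use case is a candidate for routing elsewhere, which is exactly the recommendation the prompt will make.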


Prompt 3: Peer Review Workflow Builder

Job: Design a complete peer review architecture for your agentic pipeline — which model checks which, what to score on, what failure signatures to watch for, and how to structure handoffs so review catches what self-review misses.

When to use: Before you hand an agent anything that matters. Especially if your pipeline involves data processing, financial numbers, document reasoning, or any output that a downstream human or system will trust without re-verifying every line.

What you'll get: A peer review system design tailored to your specific pipeline, with model assignments, scoring rubrics, failure signature detection, handoff protocols, escalation triggers, and implementation guidance.

What the AI will ask you: What your agent does, what it outputs, what the stakes are for errors, which models you have access to, and your current review process (if any).
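As a rough illustration of the kind of architecture the prompt produces, here is a minimal sketch of a peer review gate. The model identifiers, rubric dimensions, score threshold, and failure signature strings are all assumptions; the prompt will tailor each of them to your pipeline.

```python
from dataclasses import dataclass, field

# Placeholder identifiers: swap in whichever models you actually route to.
PRODUCER_MODEL = "model-a"
REVIEWER_MODEL = "model-b"  # a different model, so the reviewer does not share the producer's blind spots


@dataclass
class RubricScore:
    factual_accuracy: int   # 1-5: numbers and claims check out against the source
    completeness: int       # 1-5: nothing required is missing
    instruction_fit: int    # 1-5: output matches the task spec


@dataclass
class Review:
    scores: RubricScore
    failure_signatures: list[str] = field(default_factory=list)  # e.g. "invented total", "dropped row"

    @property
    def passes(self) -> bool:
        # Escalation trigger (assumed threshold): any rubric score below 4,
        # or any named failure signature, bounces the output.
        s = self.scores
        lowest = min(s.factual_accuracy, s.completeness, s.instruction_fit)
        return lowest >= 4 and not self.failure_signatures


def handoff(review: Review, output: str) -> str:
    """Handoff protocol sketch: accept the output, or escalate with the reviewer's notes attached."""
    if review.passes:
        return output
    reasons = ", ".join(review.failure_signatures) or "low rubric scores"
    return f"ESCALATE to human review: {reasons}"
```

Assigning the reviewer to a different model than the producer is the point of the exercise: self-review tends to inherit the same self-assessment biases the article warns about.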