# Launching an AI product: pre-launch to post-launch
AI product launches fail at a higher rate than traditional SaaS launches because the failure modes are different: a hallucination on stage, prompt injection via the launch email, a cost spike the day you trend on X, a quality regression the week after. This planner is the playbook: 17 items across 3 phases, covering 4 weeks of pre-launch prep, launch week itself, and 30 days of post-launch hardening.
## Pre-launch (T-8 to T-2 weeks)
The 7 pre-launch items are non-negotiable: an eval set of 100 examples, a red-team pass, pricing validated against the cost calculator, rate limits plus hard spend caps, a kill switch, support macros and an escalation path, and a privacy policy + DPA posted.
The item teams most often skip is pricing validation. One B2B SaaS launched an AI feature at $29/mo and lost money on every power user because it modeled cost at P50 token usage, not P95. Fixing it mid-launch is a PR mess; fixing it pre-launch is a spreadsheet edit.
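The P50-vs-P95 gap can be sanity-checked in a few lines. This is a hedged sketch: the token volumes, the blended model price, and the P50/P95 usage figures below are illustrative assumptions, not real data from the example above.

```python
# Hypothetical unit-economics check: does the $29/mo price survive P95 usage?
# All numbers below are illustrative assumptions.

PRICE_PER_USER = 29.00         # monthly subscription price (from the example)
COST_PER_1K_TOKENS = 0.01      # assumed blended model cost, $/1K tokens

def monthly_cost(requests_per_month: int, tokens_per_request: int) -> float:
    """Model spend for one user at a given usage level."""
    total_tokens = requests_per_month * tokens_per_request
    return total_tokens / 1000 * COST_PER_1K_TOKENS

# Median user vs. power user (P50 vs. P95 of observed usage -- assumed values)
p50 = monthly_cost(requests_per_month=300, tokens_per_request=2_000)
p95 = monthly_cost(requests_per_month=4_000, tokens_per_request=6_000)

print(f"P50 user cost: ${p50:.2f}  margin: ${PRICE_PER_USER - p50:.2f}")
print(f"P95 user cost: ${p95:.2f}  margin: ${PRICE_PER_USER - p95:.2f}")
```

With these assumptions the median user is comfortably profitable while the P95 user burns several times the subscription price, which is exactly the pattern that only shows up if you model the tail.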
## Launch week (Days 1-7)
Gradual rollout: 1% → 10% → 50% → 100%, promoting only when quality and cost gates hold for 24 hours. Hourly dashboard. Daily retro. Ship fixes same-day for Sev1/Sev2. On-call rotation covers AI incidents. Customer comms if the launch is public.
The single most valuable practice: a 30-minute daily retro for 7 days, reviewing cost, quality, latency, incident count, and user-reported issues. If cost is 2× projection, figure out why before promoting. If quality is degrading as users find edge cases, ship prompt fixes same day. The retros compound the first week's learnings and prevent the "mystery post-launch dip" pattern.
## Post-launch (Days 8-30)
Weekly AI impact review. Cache hit rate ≥ 70% for hot paths. Add a model-routing layer (60-85% cost cut on most workloads). Schedule quarterly red-teams. Iterate pricing based on actual unit economics. Retire dead features.
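A model-routing layer can be as simple as a complexity heuristic in front of two models. This is a minimal sketch under stated assumptions: the model identifiers, the threshold, and the heuristic itself are placeholders, not a production router.

```python
# Minimal model-routing sketch: send easy requests to a cheap model and hard
# ones to a frontier model. Names, threshold, and heuristic are assumptions.

CHEAP_MODEL = "small-model"      # placeholder identifier
FRONTIER_MODEL = "large-model"   # placeholder identifier

def estimate_complexity(prompt: str) -> float:
    """Crude heuristic: long prompts or reasoning keywords score higher."""
    score = min(len(prompt) / 4000, 1.0)
    if any(kw in prompt.lower() for kw in ("prove", "analyze", "multi-step")):
        score = max(score, 0.8)
    return score

def route(prompt: str, threshold: float = 0.7) -> str:
    """Route to the frontier model only when complexity crosses the threshold."""
    if estimate_complexity(prompt) >= threshold:
        return FRONTIER_MODEL
    return CHEAP_MODEL
```

The cost cut comes from the fact that most traffic is simple: if 80% of requests route to a model that costs a tenth as much, blended cost drops sharply even before caching.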
## What goes wrong on launch and how to prevent it
| Failure | Root cause | Prevention |
|---|---|---|
| Hallucination on stage | No red-team for tricky questions | Red-team pass T-2 weeks |
| Cost spike day 1 | Modeled on P50, hit P95 | Hard spend caps + rate limits |
| Prompt injection via launch content | No input sanitization | Treat user input + retrieved content as untrusted |
| Latency regression at scale | Not load-tested | Shadow traffic at 100% for 2 weeks pre-launch |
| Quality degrades week 2 | No eval set, no monitoring | Eval set nightly + weekly review |
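The "hard spend caps" prevention in the table above can be sketched concretely. This is a hedged, in-process example; a real deployment would back the counter with shared storage (e.g. Redis) so the cap holds across replicas, and the budget figure here is arbitrary.

```python
# Sketch of a hard daily spend cap, assuming a single-process counter.
import threading

class SpendCap:
    """Refuse new model calls once the daily budget is exhausted."""

    def __init__(self, daily_budget_usd: float):
        self.budget = daily_budget_usd
        self.spent = 0.0
        self._lock = threading.Lock()

    def try_spend(self, estimated_cost_usd: float) -> bool:
        """Reserve budget atomically; return False to reject the request."""
        with self._lock:
            if self.spent + estimated_cost_usd > self.budget:
                return False
            self.spent += estimated_cost_usd
            return True

cap = SpendCap(daily_budget_usd=500.0)   # arbitrary example budget
if not cap.try_spend(0.12):
    raise RuntimeError("Daily AI budget exhausted; request rejected")
```

The point of the hard cap is that a day-1 traffic spike degrades into rejected requests rather than an unbounded bill; rate limits shape the curve, the cap bounds the worst case.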
## What the planner gives you
The interactive plan tracks each item across 3 phases, produces a progress bar, and exports a markdown plan for your tracker. Customize the owners to your team.
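The markdown export is the kind of thing worth keeping trivially simple. A toy sketch with a made-up item structure; the planner's actual fields and output format may differ.

```python
# Toy sketch of exporting a phase-grouped checklist as markdown task lists.
# The item schema here is an assumption, not the planner's real data model.

items = [
    {"phase": "Pre-launch", "task": "Eval set of 100 examples", "done": True},
    {"phase": "Pre-launch", "task": "Red-team pass", "done": False},
    {"phase": "Launch week", "task": "Hourly dashboard", "done": False},
]

def export_markdown(plan: list) -> str:
    """Group items by phase and emit GitHub-style task checkboxes."""
    lines = []
    for phase in dict.fromkeys(i["phase"] for i in plan):  # preserves order
        lines.append(f"## {phase}")
        for i in plan:
            if i["phase"] == phase:
                box = "x" if i["done"] else " "
                lines.append(f"- [{box}] {i['task']}")
    return "\n".join(lines)

print(export_markdown(items))
```

GitHub-style `- [ ]` checkboxes paste cleanly into most trackers, which is why they make a good lowest-common-denominator export format.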
- AI Product Launch Cost Calculator: budget the launch.
- AI SaaS Pricing Calculator: price it so unit economics work.
- Enterprise AI Security Checklist: security posture before launch.
- LLM Migration Planner: if you're swapping models as part of the launch.