A 30/60/90 AI adoption roadmap that actually ships
Most AI adoption plans are decks. This one is a checklist. 18 items across 3 phases, ordered by what needs to be true before the next phase is possible. Used as written, it takes an org from "we should do AI" to "first pilot is in production" in 90 days.
Days 1-30: Foundation
Before any tool is bought, these six items need to be true:
- Draft AI acceptable-use policy published (not approved — just published).
- Inventory of current AI usage across teams (expect surprises).
- Two use cases with measurable baselines (current cost, time, error rate).
- One accountable owner per use case (a person, not a committee).
- SSO stood up for the workspace tiers of Claude, ChatGPT, and Gemini.
- Five internal champions trained on prompt basics (not the loudest; the ones who ship).
If month one is skipped, month two will fail. We've watched teams try to launch a customer-facing AI chatbot without an AUP and get blocked by legal at week 9. Days 1-30 are not glamorous; they are load-bearing.
Days 31-60: Pilot
With foundations in place, ship. Five items:
- One pilot to 10% of target users behind a feature flag.
- Cost tracking by workload.
- An eval set of 50 golden examples versioned in git.
- A weekly AI metrics review (30 minutes).
- DPAs signed with every vendor.
The most common failure mode here is "we shipped and forgot to measure." The weekly review is the forcing function. 30 minutes. Cost, pass rate, escalation rate, incidents. That's it. The reviews compound.
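The golden-example eval set can be as simple as a script that runs before every rollout. A minimal sketch, assuming a hypothetical `run_model()` (stubbed here; in practice it would call your sanctioned model API) and an illustrative pass threshold:

```python
# Sketch of a golden-example eval harness. GOLDEN, run_model(), and the
# 0.9 threshold are illustrative assumptions; in practice the examples
# live in a versioned file (e.g. evals/golden.json) and run_model()
# calls the real model behind the feature flag.

GOLDEN = [
    {"input": "refund window?", "must_contain": "30 days"},
    {"input": "support email?", "must_contain": "support@"},
]

def run_model(prompt: str) -> str:
    # Stand-in for the real model call.
    canned = {
        "refund window?": "Refunds are accepted within 30 days of purchase.",
        "support email?": "Reach us at support@example.com.",
    }
    return canned.get(prompt, "")

def pass_rate(examples) -> float:
    passed = sum(
        1 for ex in examples
        if ex["must_contain"].lower() in run_model(ex["input"]).lower()
    )
    return passed / len(examples)

if __name__ == "__main__":
    rate = pass_rate(GOLDEN)
    print(f"pass rate: {rate:.0%}")
    assert rate >= 0.9, "quality gate failed - block the rollout"
```

The point of versioning the examples in git is that a pass-rate drop is diffable: you can see exactly which prompt regressed and when.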
Days 61-90: Scale
Expand to 100% of target users if quality gates held. Then:
- Add a model-routing layer (Haiku/Flash for easy, Sonnet/GPT-5 for medium, Opus/o4 for hard).
- Lock in prompt caching for high-volume workloads.
- Publish a quarterly impact report.
- Retire one legacy vendor whose job AI now does (if you can't name one, the pilot was too small).
- Schedule pilot 2.
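The routing layer doesn't need to be clever on day one. A sketch under stated assumptions: the model names are placeholders for whichever tier you've contracted, and `classify()` is a toy heuristic standing in for a cheap classifier model or task metadata.

```python
# Sketch of a difficulty-based model router. Tier names, model IDs,
# and the classify() heuristic are illustrative assumptions.
from dataclasses import dataclass

TIERS = {
    "easy": "claude-haiku",    # or Gemini Flash
    "medium": "claude-sonnet", # or GPT-5
    "hard": "claude-opus",     # or o4
}

@dataclass
class Route:
    tier: str
    model: str

def classify(prompt: str) -> str:
    # Toy heuristic: trigger words and prompt length. Real routers use
    # a small classifier model or per-workload configuration instead.
    if any(w in prompt.lower() for w in ("prove", "architecture", "migrate")):
        return "hard"
    return "medium" if len(prompt) > 400 else "easy"

def route(prompt: str) -> Route:
    tier = classify(prompt)
    return Route(tier=tier, model=TIERS[tier])

print(route("Summarize this ticket").model)  # short prompt lands on the easy tier
```

Even a crude router pays for itself: most production traffic is easy-tier work, and the cost gap between tiers is typically an order of magnitude.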
Common pitfalls and how to avoid them
- Buying tools before the AUP. Reverse the order — policy first, then buy.
- Committees as owners. A committee is not an owner. Assign a person.
- Skipping baselines. Without a baseline, "pilot success" is a guess.
- No kill switch. Feature flags are not optional for AI launches.
- Ignoring shadow IT. Someone on your team is using ChatGPT free tier for customer data right now. Talk to them, don't punish them; migrate them to the sanctioned tier.
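The kill-switch pitfall is cheap to avoid. A minimal sketch, with an in-memory dict standing in for a real feature-flag service (the names `FLAGS` and `ai_reply_suggestions` are illustrative); the key property is that the flag is read at request time, so flipping it takes effect immediately:

```python
# Kill-switch sketch for an AI feature. FLAGS is an in-memory stand-in
# for a feature-flag service; flag and function names are illustrative.

FLAGS = {"ai_reply_suggestions": True}

def ai_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)  # unknown flags default off: fail closed

def handle_request(ticket: str) -> str:
    # Checked on every request, so a flag flip is an instant rollback.
    if ai_enabled("ai_reply_suggestions"):
        return f"[AI draft] suggested reply for: {ticket}"
    return f"[manual] routed to human queue: {ticket}"

print(handle_request("order #123 late"))  # AI path while the flag is on
FLAGS["ai_reply_suggestions"] = False     # the kill switch
print(handle_request("order #123 late"))  # falls back to the human queue
```

Note the fail-closed default: a missing or misspelled flag disables the AI path rather than enabling it.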
Measuring success
The 90-day exit test: one pilot in production, measurable impact, cost per value tracked, one retired legacy vendor, and a staffed champion team. If you can't check those 5 boxes, extend the roadmap — don't start pilot 2.
Related resources
- AI Readiness Assessment — Tier yourself before starting the roadmap.
- AI Governance Checklist — The policy + risk checklist you need in month one.
- Enterprise AI Security Checklist — Operational controls for the pilot.
- AI Spend Tracker — Stand up cost tracking in month one.