AI adoption roadmap

A 30/60/90 checklist for rolling out AI across teams — governance, pilots, vendors, measurement.

Frequently asked questions

1. Is 90 days realistic?

Yes for a mid-sized org with leadership buy-in and named owners. Larger orgs or those starting from Foundation tier may need 180 days to complete the same items — the order still holds.

2. What if we skip the foundations?

Month two fails. Most commonly: pilot blocked at week 9 because legal hasn't reviewed vendors, or IT finds PII leaking to consumer-tier accounts. Foundations are not glamorous; they're load-bearing.

3. How do I pick the first use case?

A measurable baseline (time, cost, error rate), a named owner, internal-first if possible, and a 12-week horizon. Avoid customer-facing use cases in the first pilot unless readiness is high.

4. Should we build or buy for the pilot?

Buy. Building during pilot 1 adds 3-6 months of engineering that you should spend on measurement. Graduate to build during pilot 2 if the vendor economics don't work.

5. What if we don't retire any legacy vendor in 90 days?

The pilot was probably too small. Leadership will ask "so what did we get for this?" — having one retired vendor is the simplest answer.

A 30/60/90 AI adoption roadmap that actually ships

Most AI adoption plans are decks. This one is a checklist. 18 items across 3 phases, ordered by what needs to be true before the next phase is possible. Used as written, it takes an org from "we should do AI" to "first pilot is in production" in 90 days.

Days 1-30: Foundation

Before any tool is bought, these six items need to be true:

  • Draft AI acceptable-use policy (AUP) published (not approved — just published).
  • Inventory of current AI usage across teams (expect surprises).
  • Two use cases with measurable baselines (current cost, time, error rate; see the sketch after this list).
  • One accountable owner per use case (a person, not a committee).
  • SSO stood up for the workspace tiers of Claude, ChatGPT, Gemini.
  • Five internal champions trained on prompt basics (not the loudest; the ones who ship).
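
A baseline only counts if it's written down in a comparable form. Here is a minimal sketch of the record each owner could keep, in Python; the field names and example values are assumptions, not a prescribed schema:

```python
# Hypothetical baseline record: the three numbers the checklist asks for,
# plus the named owner. Adapt field names to your own tracking.
from dataclasses import dataclass

@dataclass
class UseCaseBaseline:
    use_case: str            # e.g. "support ticket triage"
    owner: str               # one person, not a committee
    cost_per_unit: float     # current $ per ticket, report, etc.
    minutes_per_unit: float  # current handling time
    error_rate: float        # 0-1, however you define an error today

# Illustrative values only.
baseline = UseCaseBaseline(
    use_case="support ticket triage",
    owner="j.doe",
    cost_per_unit=4.20,
    minutes_per_unit=11.0,
    error_rate=0.08,
)
```

Whatever the pilot measures in month two gets compared against exactly these numbers, so capture them before anything ships.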

If month one is skipped, month two will fail. We've watched teams try to launch a customer-facing AI chatbot without an AUP and get blocked by legal at week 9. Days 1-30 are not glamorous; they are load-bearing.

Days 31-60: Pilot

With foundations in place, ship:

  • One pilot to 10% of target users behind a feature flag.
  • Cost tracking by workload.
  • Eval set of 50 golden examples versioned in git (see the sketch after this list).
  • Weekly AI metrics review (30 minutes).
  • DPAs signed with every vendor.
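
One way "50 golden examples versioned in git" can look in practice: a JSONL file checked into the repo plus a small runner. The file path, field names, and exact-match grader below are all assumptions; treat this as a sketch, not a standard:

```python
# Minimal eval runner over golden examples stored in git.
# evals/golden.jsonl is a hypothetical path; one JSON object per line,
# e.g. {"input": "...", "expected": "..."}.
import json
from pathlib import Path

def load_golden(path: str = "evals/golden.jsonl") -> list[dict]:
    lines = Path(path).read_text().splitlines()
    return [json.loads(line) for line in lines if line.strip()]

def pass_rate(predict, cases: list[dict]) -> float:
    """predict is the pipeline under test; exact match is the simplest grader."""
    passed = sum(1 for case in cases if predict(case["input"]) == case["expected"])
    return passed / len(cases)
```

Run it on every prompt change; the number it produces is the same pass rate the weekly review tracks.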

The most common failure mode here is "we shipped and forgot to measure." The weekly review is the forcing function. 30 minutes. Cost, pass rate, escalation rate, incidents. That's it. The reviews compound.
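
If you record that review as data rather than slides, one row per week might look like this (an assumed shape, not a required one):

```python
# One week's review as a record: the four numbers the roadmap names.
from dataclasses import dataclass
from datetime import date

@dataclass
class WeeklyAIReview:
    week_of: date
    cost_usd: float         # total model spend for the workload
    pass_rate: float        # share of golden evals passing, 0-1
    escalation_rate: float  # share of sessions handed to a human, 0-1
    incidents: int          # anything that tripped a flag or filed a ticket
```

Thirty minutes is enough precisely because the agenda never grows past these four fields.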

Days 61-90: Scale

If quality gates held, expand:

  • Roll out to 100% of target users.
  • Add a model-routing layer (Haiku/Flash for easy, Sonnet/GPT-5 for medium, Opus/o4 for hard); see the sketch after this list.
  • Lock in prompt caching for high-volume workloads.
  • Publish a quarterly impact report.
  • Retire one legacy vendor whose job AI now does (if you can't name one, the pilot was too small).
  • Schedule pilot 2.
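
A model-routing layer can start as a lookup keyed on task difficulty. The sketch below makes two assumptions: difficulty can be scored upstream (task type and length here; a cheap classifier in practice), and the tier strings stand in for whatever model IDs your vendors currently publish:

```python
# Minimal model router: map task difficulty to a model tier.
# Tier strings are placeholders; substitute your vendors' real model IDs.
from enum import Enum

class Difficulty(Enum):
    EASY = "easy"      # extraction, classification, short rewrites
    MEDIUM = "medium"  # constrained drafting, summarization
    HARD = "hard"      # multi-step reasoning, ambiguous inputs

MODEL_TIERS = {
    Difficulty.EASY: "haiku-or-flash-tier",
    Difficulty.MEDIUM: "sonnet-or-gpt-5-tier",
    Difficulty.HARD: "opus-or-o4-tier",
}

def classify(task_text: str, task_type: str) -> Difficulty:
    """Crude, auditable heuristics; default up, not down."""
    if task_type in {"classify", "extract"} and len(task_text) < 2_000:
        return Difficulty.EASY
    if task_type in {"draft", "summarize"}:
        return Difficulty.MEDIUM
    return Difficulty.HARD

def route(task_text: str, task_type: str) -> str:
    return MODEL_TIERS[classify(task_text, task_type)]
```

Prompt caching pairs naturally with the router: keep the shared system prompt byte-identical across requests so the provider can reuse the prefix (Anthropic exposes this explicitly via cache_control on content blocks; OpenAI applies caching automatically to long repeated prefixes).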

Common pitfalls and how to avoid them

  • Buying tools before the AUP. Reverse the order — policy first, then buy.
  • Committees as owners. A committee is not an owner. Assign a person.
  • Skipping baselines. Without a baseline, "pilot success" is a guess.
  • No kill switch. Feature flags are not optional for AI launches (see the sketch after this list).
  • Ignoring shadow IT. Someone on your team is pasting customer data into the ChatGPT free tier right now. Talk to them rather than punishing them, and migrate them to the sanctioned tier.
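
What the kill-switch pitfall means in code: every AI-backed path checks a flag someone can flip without a deploy. A minimal sketch, assuming an environment-variable flag store for brevity (use a real flag service in production); legacy_answer and ai_answer are hypothetical stand-ins for your existing and model-backed paths:

```python
# Minimal kill switch: the flag is read per request, so flipping it
# takes effect immediately, with no redeploy.
import os

def ai_enabled(flag: str = "AI_PILOT_ENABLED") -> bool:
    return os.environ.get(flag, "false").lower() == "true"

def legacy_answer(query: str) -> str:   # hypothetical pre-AI path
    return f"[legacy] {query}"

def ai_answer(query: str) -> str:       # hypothetical model-backed path
    return f"[ai] {query}"

def answer(query: str) -> str:
    return ai_answer(query) if ai_enabled() else legacy_answer(query)
```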

Measuring success

The 90-day exit test: one pilot in production, measurable impact against baseline, cost per unit of value tracked, one retired legacy vendor, and a staffed champion team. If you can't check all five boxes, extend the roadmap — don't start pilot 2.
