AI ROI calculator

Return on investment from AI tooling spend versus time saved and new revenue.

Frequently asked questions

What counts as 'new revenue'?

Revenue that wouldn't exist without the AI tool: more deals closed, faster launches, new products enabled.

AI ROI is almost always misstated β€” here's the honest model

Most AI ROI calculations are theater. A vendor touts 10× ROI, a consultant certifies it, and six months later your finance partner asks why the cost base did not move. The gap between claimed and realized ROI on AI tooling is, in our observation across dozens of rollouts, consistently 50–70%. Not because the tools are fraudulent (they usually do save time) but because the ROI model leaves out adoption curves, integration cost, quality-loss risk, and the opportunity cost of the rollout itself.

The fix is not more sophisticated math; it is honest accounting. Every AI rollout has six cost components (license, implementation, training, integration, quality-risk, opportunity cost) and three value components (direct time savings, throughput lift, quality improvement). Vendors quote the first value component against the first cost component and call it ROI. The full picture usually moves year-one ROI from "300%" to "40–150%", and year-two to the genuinely impressive 150–350% range. Both sets of numbers can be "right"; they are just answering different questions.

This article walks the honest model line by line, shows three rollouts at different scales with real numbers, and calls out the measurement traps that cause year-one ROI to get overstated. The goal is not pessimism; well-executed AI rollouts deliver strong returns. The goal is to separate the 80% of deployments that work from the 20% that quietly eat budget for three years before anyone kills them.

The five-component ROI equation

Total realized annual ROI = (A) realized time savings × loaded labor rate × adoption rate − (B) license cost − (C) implementation cost (one-time, amortized) − (D) integration and training cost (ongoing) − (E) risk-adjusted downside (quality errors, rework). Opportunity cost is real but hard to book, so it appears in the table below rather than in the equation. Most public ROI claims cover only (A) minus (B).
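The gap between the vendor math and the full equation is easiest to see side by side. A minimal sketch: the function names and the example figures (claimed savings, adoption, costs) are illustrative assumptions, not any vendor's actual model.

```python
# Sketch of the equation above. vendor_roi is what most public claims
# compute: (A) minus (B) only. realized_roi adds adoption and costs C-E.

def vendor_roi(time_savings: float, license_cost: float) -> float:
    """ROI as usually claimed: savings minus license, over license."""
    return (time_savings - license_cost) / license_cost

def realized_roi(time_savings: float, adoption: float, license_cost: float,
                 implementation: float, integration_training: float,
                 risk_adjusted_downside: float) -> float:
    """Full equation: adoption-scaled (A) minus (B) through (E)."""
    value = time_savings * adoption
    costs = (license_cost + implementation
             + integration_training + risk_adjusted_downside)
    return (value - costs) / costs

# Same rollout, two answers: $200k/yr claimed savings, $20k/yr license,
# 50% adoption, $30k implementation, $10k training, $8k quality downside.
print(f"vendor math:   {vendor_roi(200_000, 20_000):.0%}")
print(f"honest math:   {realized_roi(200_000, 0.5, 20_000, 30_000, 10_000, 8_000):.0%}")
```

With these inputs the vendor number is 900% and the realized number is about 47%, the same "300% becomes 40–150%" compression the text describes.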

| Line item | Typical magnitude | How it is usually missed |
| --- | --- | --- |
| License (B) | $10–$50/user/mo | The easy part. Known. |
| Time savings (A) | Claimed 4–8 hr/wk; realized 1.5–3 hr/wk | Adoption × review time |
| Implementation (C) | 1–3 months × $10k–$200k | Not amortized against year-one ROI |
| Training / change mgmt (D) | $50–$500/user one-time + ongoing | Skipped entirely |
| Quality risk (E) | 2–10% rework on AI output | Assumed to be zero |
| Opportunity cost | $20k–$200k of eng time on rollout | Never counted |

Realistic ROI by tool category

| Tool category | Year-1 ROI range | Year-2 ROI range |
| --- | --- | --- |
| AI coding assistants (Cursor, Copilot) | 80–180% | 200–400% |
| AI meeting notes (Granola, Fathom, Fireflies) | 150–300% | 200–350% |
| Customer support copilots (Zendesk AI, Intercom Fin) | 60–150% | 200–500% |
| Marketing copy (Claude, Jasper) | 100–250% | 150–300% |
| Custom agents (internal use) | -20% to +80% | 100–400% |
| Enterprise chatbot (HR, IT helpdesk) | -40% to +100% | 100–300% |

Custom agents and internal chatbots land last because they have high year-1 build cost and adoption that takes 6–12 months to reach steady state. They are typically the highest year-2+ ROI once you get through the valley.

The measurement plan nobody writes

Decide three metrics before launch: a leading indicator (weekly active usage per eligible user), a throughput indicator (tasks completed per week per user), and a quality indicator (rework rate, error rate, NPS). Measure baselines for 4 weeks pre-launch. Measure again at 30, 60, and 90 days post-launch. If the numbers are not there by day 90, kill or fix; do not let a rollout coast on vibes.
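One way to keep that plan honest is to write it down as data before launch and check against it mechanically at day 90. A sketch; the metric names and thresholds are illustrative assumptions, not recommended targets.

```python
# Pre-launch plan: one leading, one throughput, one quality metric.
# Quality is checked "lower is better"; the other two "higher is better".

PLAN = {
    "leading":    {"metric": "weekly active users / eligible users", "day90_target": 0.40},
    "throughput": {"metric": "tasks completed per user per week",    "day90_target": 23.0},
    "quality":    {"metric": "rework rate (lower is better)",        "day90_target": 0.06},
}

def day90_verdict(observed: dict) -> str:
    """Kill-or-fix check at day 90 against the pre-launch plan."""
    misses = []
    for key, spec in PLAN.items():
        ok = (observed[key] <= spec["day90_target"] if key == "quality"
              else observed[key] >= spec["day90_target"])
        if not ok:
            misses.append(key)
    return "keep" if not misses else "fix or kill: missed " + ", ".join(misses)

print(day90_verdict({"leading": 0.52, "throughput": 24.0, "quality": 0.05}))
```

The point of encoding the thresholds up front is that nobody can quietly redefine success after launch.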

Three deployments at different scales

  • Small startup (35 people) deploying a coding copilot: License: $19/user × 35 × 12 = $7,980/year. Implementation: minimal, roughly 1 week of an engineer's time evaluating, maybe $5k. Training: nominal. Time savings at a realistic 15% velocity lift for engineers who spend ~60% of their time coding ≈ 4 hr/week/engineer. At $95/hr loaded × 25 engineers × 4 hr × 48 weeks = $456k/year of time. Quality risk: minimal with good code review. Year-1 ROI: ~5,600% on license alone, still ~3,400% after implementation.
  • Mid-market B2B SaaS (180 people) deploying an enterprise chat platform: License: $30/user × 180 × 12 = $64,800/year. Implementation: $45k (SSO, data connectors, rollout). Training + change mgmt: $25k. Time savings: 2 hr/week at 60% adoption across 140 knowledge workers × $70/hr × 48 weeks ≈ $565k of time value. Year-1 ROI: ~320%. Year-2 roughly doubles that as adoption matures and the one-time costs fall away.
  • Enterprise finance (2,400 people) deploying a RAG assistant on internal policy: Build: $340k. Infra: $14k/month = $168k/year. Training + change mgmt: $120k. Time savings: 1.5 hr/week × 1,800 eligible users × 50 weeks × $85/hr × 45% adoption = $5.16M. Year-1 ROI: ~720%. This is the archetypal enterprise internal-tool ROI: big numbers on both sides, strongly positive net.
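The enterprise case is the one most worth recomputing yourself, because every factor is stated. A sketch that re-runs those inputs; the printed ROI lands within rounding of the year-1 figure quoted above.

```python
# Re-running the enterprise RAG-assistant bullet: inputs as stated.
hours_per_week = 1.5
eligible_users = 1_800
weeks          = 50
loaded_rate    = 85      # $/hr, loaded
adoption       = 0.45

time_value  = hours_per_week * eligible_users * weeks * loaded_rate * adoption
year1_costs = 340_000 + 168_000 + 120_000   # build + infra + training/change mgmt
roi         = (time_value - year1_costs) / year1_costs

print(f"time value ${time_value:,.0f}, year-1 costs ${year1_costs:,.0f}, ROI {roi:.0%}")
```

Keeping the factors as named variables, rather than one opaque product, is also what makes the number defensible to finance later.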

When to kill a rollout

Concrete kill criteria worth writing into the plan:

  • Weekly active use of core workflow below 25% at week 8.
  • No measurable throughput change on the target workflow at day 90.
  • Rework or error rate up more than 5 percentage points on affected tasks.
  • Sales-cycle or CX metrics the tool was supposed to improve moving in the wrong direction.
  • Team morale surveys showing consistent negative sentiment after the initial novelty wears off.

Killing a rollout cleanly is a sign of operational maturity. Letting it zombie-run for a year is how AI line items accumulate without producing value.

Why year-2 is where the real ROI sits

First-year ROI includes the one-time setup cost and reflects pre-steady-state adoption. Year-2, with setup amortized and adoption stable, typically doubles the ROI number. This is why "our year-1 ROI is only 80%" is not necessarily a failure: if the adoption curve and quality signals are right, year-2 will land at 150–250% and the investment pays off over the tool's useful life. Budget accordingly.
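A toy sketch of those mechanics, with invented numbers: setup is amortized over 24 months (the amortization period the FAQ below suggests for tooling) and adoption ramps from just over half of steady state to all of it.

```python
# Invented numbers; only the year-1 vs year-2 mechanics are the point.
license_annual   = 60_000
ongoing_training = 40_000
setup_amortized  = 60_000   # $120k one-time, spread over 24 months
steady_value     = 520_000  # annual time value at mature adoption

annual_costs = license_annual + ongoing_training + setup_amortized
year1 = (steady_value * 0.55 - annual_costs) / annual_costs  # adoption still ramping
year2 = (steady_value        - annual_costs) / annual_costs  # full adoption

print(f"year 1: {year1:.0%}, year 2: {year2:.0%}")
```

With these inputs year 1 comes out near 80% and year 2 near 225%: the same tool, the same spend, a very different headline once adoption matures.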

The opportunity-cost component everyone forgets

An AI rollout consumes engineering, ops, and leadership attention: 2–3 engineer-months of integration, 1–2 executive steering meetings a month, a month of L&D time. That attention comes from somewhere, usually product velocity or other growth initiatives. For a rollout to be net-positive, the realized ROI has to beat both the direct costs and what that attention would have produced elsewhere.

What real companies actually track (not what they pitch)

Notion AI, Cursor, and Glean all publish ROI case studies that read like marketing copy. Talk to the finance teams that actually bought them and the story is more nuanced. Notion AI's internal numbers focus on retention of paid plans, not time saved, because the moment AI becomes a feature, churn economics dominate. Cursor's enterprise sales deck leads with "fewer context switches" because raw LOC or PR counts flattered productivity in ways CTOs dismissed. Glean's enterprise deployments measure "queries answered without opening a new tab" because that is the behavior change that actually correlates with reported productivity gains. The pattern: sophisticated buyers pick a behavior metric that is hard to game, and they measure it relentlessly.

The attribution problem

The biggest ROI-measurement challenge is counterfactual: you only have one timeline, so you cannot know what would have happened without AI. The honest method is a staggered rollout with a control group: team A gets the tool in Q1, team B in Q2, team C in Q3. Compare deltas. Most orgs skip this because it feels slow, then fight about attribution for a year. The staggered approach is a two-month slower rollout that buys a decade of clarity on ROI. Do it.
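The delta comparison from a staggered rollout is a plain difference-in-differences calculation. A minimal sketch, with invented numbers: team A has the tool, team B is still waiting.

```python
# Difference-in-differences: subtract the control group's change so that
# company-wide drift (seasonality, hiring, process changes) cancels out.

def did_lift(treated_before: float, treated_after: float,
             control_before: float, control_after: float) -> float:
    """Lift attributable to the tool, net of whatever moved for everyone."""
    return (treated_after - treated_before) - (control_after - control_before)

# tasks per user per week: both teams improved, but team A improved more
lift = did_lift(treated_before=20.0, treated_after=24.5,
                control_before=20.5, control_after=22.0)
print(f"attributable lift: {lift:.1f} tasks/user/week")
```

Here the naive before/after claim would be 4.5 tasks/week; the control group shows 1.5 of that happened to everyone, leaving 3.0 attributable to the tool.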

Risk-adjusted ROI: the hidden quality tax

Most ROI models silently assume AI output quality equals human output quality. In practice, for content-creation tasks, AI-produced work has a 3–8% error rate (wrong facts, stylistic drift, hallucinated citations) that requires remediation downstream. For code, that figure is closer to 10–15% on non-trivial changes even with Claude Sonnet 4.5 or GPT-5. Model these as rework costs: 5% of hours saved reassigned to fixing AI output. If you are claiming 4 hours/week saved, budget 10–15 minutes/week of that back for error correction. The ROI is still strong; the math is just honest.
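The rework adjustment above is one line of arithmetic, but it is worth making explicit. A sketch using the 5% rework share and the 4 hr/week example from the paragraph.

```python
# Net out the quality tax: a share of claimed hours saved goes back
# into fixing AI output (wrong facts, drift, hallucinated citations).

def net_hours_saved(claimed_hours_per_week: float,
                    rework_share: float = 0.05) -> float:
    return claimed_hours_per_week * (1.0 - rework_share)

claimed    = 4.0                     # hr/week of claimed savings
rework_min = claimed * 0.05 * 60     # minutes/week reassigned to error correction

print(f"net {net_hours_saved(claimed):.1f} hr/wk, rework {rework_min:.0f} min/wk")
```

At 4 hr/week claimed, the 5% rework share hands back 12 minutes a week, squarely inside the 10–15 minute budget the text suggests.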

Red flags in vendor ROI claims

  • Percentage numbers without a denominator. "80% time saved" on what task, at what adoption, over what time window?
  • Case studies without named customers. Anonymous "Fortune 500 enterprise" is marketing, not evidence.
  • Year-1 ROI claimed without one-time costs. Nobody has 500% year-1 ROI after honest accounting.
  • No control group in the methodology. Productivity measurement without a counterfactual is storytelling, not measurement.
  • ROI claimed on "soft" metrics (employee satisfaction, innovation capacity) that cannot be translated to dollars. These may be real, but they are not ROI.

Frequently asked questions

What is a realistic first-year ROI? Positive ROI in year 1 is common but not universal. 0–150% range is typical; larger numbers are usually for tools with low integration cost.

Which category has the most predictable ROI? Coding copilots and meeting notes. Both have short adoption curves and clear time savings.

Which is the riskiest? Internal chatbots and custom agents; they take 6–12 months to get right and adoption is fragile.

How do I defend ROI math to a skeptical CFO? Show the measurement plan and the kill criteria. Hand-waving ROI loses the conversation; operational rigor wins it.

What is a reasonable amortization period for setup costs? 24 months for tooling, 36 months for custom builds. Longer if the build is foundational infrastructure.

Should I include saved slack as realized ROI? Only partially. Count it at 30–50% face value unless you have a specific plan to convert it (hiring freeze, growth reinvestment).

How often should I revisit ROI? Quarterly for the first year, annually after that. Market conditions and model capabilities shift fast.

What is the right comparison benchmark? Other AI investments with measured ROI (copilot, meeting notes, chatbot), not non-AI investments. The relevant question is "which AI tool gives us the most per dollar".
