
Job automation risk

Estimate the risk that routine tasks in a role get automated by AI.



Frequently asked questions

What about new jobs created?

History says automation creates new jobs too — but not always for the same people. Reskilling is the hedge.

Job automation risk in 2026: the honest taxonomy

Three years into the post-ChatGPT era, the data on which jobs AI has actually automated is finally coherent. Not "will be automated someday" — actually shipped, replaced, or materially de-staffed by end of Q1 2026. The pattern is narrower and more task-specific than the "AI will take your job" headlines suggested in 2023–2024. It is also more impactful than the "LLMs are just fancy autocomplete" pushback claimed.

The four-quadrant framework

Any role breaks into some mix of four task types. Automation risk is a weighted sum, not a job title.

Quadrant | Example tasks | Automation risk 2026 | 2028 outlook
--- | --- | --- | ---
Repetitive structured | Data entry, invoice coding, form processing | High (70–90% automatable) | Essentially gone
Expert knowledge application | Legal research, basic coding, L1 support, first-draft copy | Moderate (30–50% of tasks) | Task-shifts; role survives
Judgment + stakeholder mgmt | Sales closing, negotiation, PM prioritization | Low (<20%) | AI as research assistant
Physical / in-person | Healthcare, skilled trades, field sales | Very low (<10%) | Barely touched

Roles with measurable 2024–2026 displacement

  • Translation (freelance general-purpose): 30–50% market contraction. Specialists unaffected.
  • Content writing (generic SEO/affiliate): 60% contraction. Branded/journalistic writing stable.
  • Customer support L1: 15–35% headcount reduction at large B2C. L2+ unchanged.
  • Basic illustration / stock imagery: 40–60% contraction. Editorial illustration stable.
  • Paralegal document review: 15–25% reduction at doc-review-heavy firms.
  • Junior data analysis: SQL + chart generation automated. Analyst headcount redirected to modeling + stakeholder work.
  • Sales development reps (SDR): Tool-augmented; ~20% headcount reduction at mature sales orgs running Clay/Apollo workflows.

Roles with minimal displacement

  • Software engineering overall (despite copilots) — augmentation, not replacement.
  • Sales account executives (AEs) — closing is judgment-heavy.
  • Registered nurses, physicians, most healthcare.
  • Skilled trades, construction management.
  • Teachers, counselors, therapists.
  • Senior management — if anything, AI extended the span of control.

How this calculator computes risk

The score you see is task_share_structured × 0.85 + task_share_expert × 0.40 + task_share_judgment × 0.15 + task_share_physical × 0.05, adjusted for your industry's AI adoption rate. Above 55%, you should be actively repositioning toward less-automatable tasks in your role. Between 35% and 55%, the realistic plan is aggressive tool adoption — become the person using AI fluently, not the person replaced by it. Below 35%, you're mostly fine but should still use AI to extend capacity.
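For readers who want to check their own number, here is that formula as a short Python sketch. The quadrant weights are the ones quoted above; the industry adjustment is modeled as a plain multiplier, which is an assumption, since the calculator's exact adjustment isn't spelled out here.

```python
# Sketch of the scoring formula described above. The industry adjustment
# is modeled as a simple multiplier; that is an assumption, since the text
# does not spell out its exact form.

QUADRANT_WEIGHTS = {
    "structured": 0.85,  # repetitive structured tasks
    "expert": 0.40,      # expert knowledge application
    "judgment": 0.15,    # judgment + stakeholder management
    "physical": 0.05,    # physical / in-person tasks
}

def automation_risk(task_shares: dict[str, float], industry_multiplier: float = 1.0) -> float:
    """Weighted-sum risk score; task_shares maps quadrant -> share of the role."""
    if abs(sum(task_shares.values()) - 1.0) > 1e-6:
        raise ValueError("task shares must sum to 1")
    base = sum(QUADRANT_WEIGHTS[q] * share for q, share in task_shares.items())
    return base * industry_multiplier

# A role that is half structured work lands in the "actively reposition" band:
# 0.5 * 0.85 + 0.3 * 0.40 + 0.2 * 0.15 = 0.575
shares = {"structured": 0.5, "expert": 0.3, "judgment": 0.2, "physical": 0.0}
print(f"{automation_risk(shares):.1%}")  # 57.5%
```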

The repositioning playbook for moderate-risk roles

  1. Identify which tasks in your role are in quadrant 1 (structured repetitive). Automate them yourself before management does.
  2. Invest 3–5 hours/week in quadrant 3 skills — stakeholder management, negotiation, domain judgment.
  3. Become fluent with the AI tools your industry uses. "AI-capable" is the new table stakes; "AI-first" is the raise-justifying tier.
  4. If you're in a contracting specialization (general-purpose translator, generic copywriter), have a 12–18 month pivot plan.

Three career scenarios with realistic 2026 trajectories

Scenario 1 — Mid-career copywriter at a mid-market agency. Role was 60% blog posts, 20% email copy, 20% client strategy. Blog posts are the 2023–2026 displacement zone. The worker who repositioned in 2024 — took on client strategy, became the AI-tool lead for the agency, built internal prompt libraries — still has the same title but now does 20% writing, 50% strategy/reviews, 30% internal AI leadership. Same comp, more secure role. The worker who did not reposition is either out or on reduced hours. This is the archetypal moderate-risk trajectory.

Scenario 2 — Senior software engineer (backend). Copilot adoption boosted throughput 15–25%. No displacement; job is easier, not scarcer. The scarce skill is now architectural judgment and debugging complex distributed systems — exactly what AI is worst at. Compensation is flat-to-up; career is stable. The worker who added production-AI experience (shipped an internal RAG tool, optimized LLM inference cost) moved from $185k to $240k in 12 months.

Scenario 3 — L1 customer support rep at a B2C SaaS. 35% of the team was let go in mid-2025 when AI ticket deflection hit 50%. Remaining reps handle harder tickets at higher pay ($52k → $62k). The path forward for the at-risk rep is L2 upskilling (product specialist, technical troubleshooting) or a lateral move into customer success. Those who pivoted are secure; those who did not are exposed.

Industry adoption heat map

Not every industry is moving at the same pace. Fast-adopters (tech SaaS, e-commerce, marketing agencies, consulting) are 2–3 years ahead; slow-adopters (manufacturing, utilities, government, regulated healthcare) are still mid-pilot in April 2026. Your automation risk is a function of (a) your role's exposure and (b) your industry's adoption speed. A paralegal at a mid-sized law firm that adopted Harvey and Ironclad has a different near-term trajectory than a paralegal at a boutique litigation shop that has not. Factor industry adoption into any personal planning.

What "AI-first" actually looks like as a competency

The phrase gets used loosely. Operationally, AI-first means: (1) you reach for the AI tool first on any appropriate task, without prompting from your manager; (2) you can evaluate model outputs critically rather than accepting them; (3) you know the cost, latency, and quality tradeoffs of the tools you use; (4) you contribute back — prompts, templates, workflows — to your team. The premium in 2026 is not on "uses AI" (baseline) but on "makes their team better at using AI" (differentiator).

What the data actually says about net job creation

Headlines about AI job loss dominate, but the BLS and private data tell a more mixed story. Tech employment is flat-to-slightly-down from 2023 peaks. Entry-level knowledge work has contracted 8–15% depending on the segment. Mid-career knowledge workers are net up in headcount, though the content of their roles has been reshaped. AI-adjacent roles (MLE, AI PM, applied AI, AI ops, AI safety) are up 40%+ in postings YoY. Net job creation within AI tooling and adjacent services roughly offsets the displacement in specific segments. The story is not a net job apocalypse; it is a painful redistribution.

Frequently asked questions

Is my job actually at risk, concretely? Do the four-quadrant decomposition on your own role. If you are >50% in quadrant 1 and your industry is a fast-adopter, you have 12–24 months to reposition. If you are <30% in quadrant 1, you have years.

Should I learn to code if I am in a non-technical role? Shallowly, yes. Deep engineering is a multi-year investment and not obviously the best pivot for most mid-career professionals. But understanding how APIs, prompts, and data pipelines work is a 40-hour investment with a durable payback.

Which industries are safest? Healthcare delivery, skilled trades, in-person services, physical therapy, mental health, early-childhood education, elder care. Any role where presence and physical trust are core.

How do I signal AI-fluency to employers? Shipped work beats certs. An internal workflow you built and documented, an OSS contribution to an LLM tool, a public writeup of your AI use at a previous role. Certs are table stakes; proof is the differentiator.

What about generative AI's effect on creative work? Stock imagery, generic illustration, generic copy — severely displaced. Editorial illustration, branded creative, expert writing — stable or up, because clients now value human creative judgment more, not less.

Will AI progress slow down? Probably not in 2026. Inference cost continues dropping; model capabilities continue expanding. Regulatory friction is real but local. Plan as if AI capabilities in 2027 are noticeably above 2026.

What is the single most important skill for a moderate-risk role? Ability to ship. Moving an idea from "we could use AI for this" to "here is the live workflow that saved the team N hours" is the career accelerant. Everything else (tools, certs, buzzwords) is commodity.

Is early retirement an option if I am close? For workers 55+ with savings, yes, and several professionals in at-risk roles chose it in 2025–2026. The calculation is usually the loaded cost of a 3-year reskill versus accelerated retirement. Case by case.

The AI-economics lens on individual risk

Underneath every automation-risk score is a unit-cost comparison. A task that a human does in 3 minutes at a $60/hr loaded rate costs $3. The same task on Claude Sonnet 4.5 ($3/$15 per million tokens) with 1,400 input tokens and 250 output tokens costs about $0.008. That is a 375× cost differential. With Anthropic prompt caching (90% read discount) on the system prompt portion, it gets closer to 600×. With Haiku 4 routing on the easy cases, 1,500–2,000×. Knowing the specific math for your own role — what a well-prompted LLM costs per task versus what your time costs — is the honest basis for the "am I at risk" conversation.
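Since the whole argument rests on that per-task arithmetic, here it is as a runnable sketch. Prices and token counts are the ones quoted above; the caching and routing variants follow the same pattern.

```python
# Per-task cost comparison from the paragraph above. Prices are USD per
# million tokens as quoted in the text; token counts are the example task.

HUMAN_RATE_PER_HOUR = 60.0    # loaded rate
HUMAN_MINUTES_PER_TASK = 3.0

SONNET_IN, SONNET_OUT = 3.00, 15.0   # Claude Sonnet 4.5
HAIKU_IN, HAIKU_OUT = 0.80, 4.0      # Haiku 4 (easy-case routing tier)

IN_TOKENS, OUT_TOKENS = 1_400, 250

def llm_cost(in_price: float, out_price: float) -> float:
    return (IN_TOKENS * in_price + OUT_TOKENS * out_price) / 1_000_000

human = HUMAN_RATE_PER_HOUR * HUMAN_MINUTES_PER_TASK / 60   # $3.00
sonnet = llm_cost(SONNET_IN, SONNET_OUT)                    # ~$0.008
haiku = llm_cost(HAIKU_IN, HAIKU_OUT)                       # ~$0.002

print(f"human ${human:.2f} vs Sonnet ${sonnet:.4f} -> {human / sonnet:.0f}x")
print(f"human ${human:.2f} vs Haiku ${haiku:.4f} -> {human / haiku:.0f}x")
# human $3.00 vs Sonnet $0.0080 -> 377x   (the ~375x in the text)
# human $3.00 vs Haiku $0.0021 -> 1415x   (before prompt caching)
```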

Model selection rules that tell you which tasks are replaceable

  • Tasks Haiku 4 ($0.80/$4) handles at ≥90% human quality are fully automatable today. These are the narrow-intent classification jobs: L1 ticket routing, intent detection, sentiment analysis, basic data entry. Roles in which these tasks make up >40% of the work are the highest risk; a cost-aware routing sketch follows this list.
  • Tasks Sonnet 4.5 ($3/$15) handles at ≥85% human quality are copilot-replaceable today, task-replaceable within 18 months. First-draft writing, meeting summaries, code reviews of medium complexity, contract-clause extraction.
  • Tasks requiring Opus 4.1 ($15/$75) to match human quality are copilot-augmented today, displacement unlikely before 2028. Deep legal analysis, architecture decisions, complex negotiation strategy.
  • Tasks no model matches yet — physical presence, stakeholder trust, in-person judgment — are not on the near-term displacement curve.
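One operational reading of those tiers is a routing rule: send each task to the cheapest model that clears the quality bar, and treat "no model clears it" as off the displacement curve. A minimal sketch, assuming per-model quality estimates are already available from evals (which is the hard part):

```python
# Illustrative cost-aware router over the tiers listed above: pick the
# cheapest model whose estimated quality clears the bar for a task. The
# per-model quality estimates are assumed inputs, not produced here.

from typing import Optional

MODELS = [  # (name, $/M input tokens, $/M output tokens), cheapest first
    ("Haiku 4", 0.80, 4.0),
    ("Sonnet 4.5", 3.00, 15.0),
    ("Opus 4.1", 15.0, 75.0),
]

def route(quality: dict[str, float], bar: float = 0.85) -> Optional[str]:
    """Return the cheapest model whose estimated quality clears the bar."""
    for name, _in_price, _out_price in MODELS:
        if quality.get(name, 0.0) >= bar:
            return name
    return None  # no model matches yet: not on the near-term displacement curve

# Example: an L1 ticket-routing task (quality numbers are illustrative).
print(route({"Haiku 4": 0.92, "Sonnet 4.5": 0.97}, bar=0.90))  # Haiku 4
```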