AI Economy Hub

Job-at-risk analyzer

Score a role task-by-task against current LLM and agent capabilities — not a vague risk percentage.

Frequently asked questions

1. How accurate is the capability score?

It's an engineering judgment informed by benchmarks (MMLU-Pro, SWE-bench, tau-bench) plus production experience. Treat it as a starting point — re-score for your specific task with a 20-prompt test if the decision matters.

2. Why task-level instead of role-level?

Jobs are bundles of tasks. AI eats tasks, not titles. A 60%-exposed role still has 40% of its cost in tasks AI can't do — that's what the job shifts toward, not away from.

3. How fast is AI capability rising?

Our running estimate from benchmark deltas: 10-15 percentage points per 18 months on most task categories. Plan for the trajectory, not the snapshot.

4. What roles held up in 2025-2026?

Roles heavy in negotiation, novel research, stakeholder management, physical tasks, and regulated accountability. Roles heavy in drafting, summarizing, and structured extraction are exposed.

5. Is there a role that went from safe to exposed recently?

Tier-1 support, routine legal research, and first-draft copywriting all moved from 'mostly safe' to 'mostly automated' in 2024-2025. Roles adjacent to those should plan actively.

Job automation risk — task by task, not role by role

"Will AI take my job?" is the wrong question. The right one is: "Which of my tasks will AI do better or cheaper in the next 18 months?" Jobs are bundles of tasks. AI eats tasks, not titles. This analyzer scores a role task-by-task against current LLM and agent capability, and tells you which tasks are at the frontier and which are safe.

How to score AI capability honestly

The capability score (0-100) answers: on this task, with a well-designed AI system today, what fraction of the work can a model do at or above an average human performer's quality?

  • 90-100%: AI is at or above average human on benchmarks and in production. Drafting routine documents, summarizing meetings, routine data entry.
  • 70-89%: AI handles most cases well; human review still catches errors in 10-30% of cases. Tier-1 support triage, code review on routine PRs, first-draft copywriting.
  • 40-69%: AI is a useful assistant but can't replace human judgment. Legal contract review, financial analysis, complex diagnosis.
  • 0-39%: AI is unreliable. Stakeholder management, negotiation, novel research, tasks requiring physical presence or deep tacit knowledge.
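The bands above can be sketched as a simple lookup. The function name and band labels below are illustrative, not part of the tool itself:

```python
def capability_band(score: int) -> str:
    """Map a 0-100 AI capability score to the bands described above.
    Labels are illustrative shorthand for the four tiers."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 90:
        return "at-or-above-human"   # AI matches or beats the average human
    if score >= 70:
        return "mostly-automatable"  # human review still catches errors
    if score >= 40:
        return "assistant"           # useful, but judgment stays human
    return "unreliable"              # negotiation, novel research, physical work
```

The thresholds are the band edges from the list above; if you re-score a task and it crosses a boundary, treat that as a signal to re-plan, not just a number change.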

The automation exposure score

The tool weights each task's AI capability by the human cost it currently consumes. That gives you a role-level exposure score: not "what % of my job will AI do" but "what % of my cost is exposed to AI." These are different numbers — and the second is the one that matters for career planning.

Example: a product manager role spends 40% of cost on status updates (95% AI capability), 20% on customer interviews (30% AI capability), and 40% on stakeholder alignment (25% AI capability). Role exposure = 0.4×95 + 0.2×30 + 0.4×25 = 54%. More than half the cost of the role is exposed; pivoting the job toward the lower-capability tasks is rational.
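The arithmetic above is a cost-weighted average. A minimal sketch (the function name and the `(cost_share, capability)` pair layout are assumptions for illustration, not the tool's actual code):

```python
def role_exposure(tasks):
    """Cost-weighted exposure score for a role.

    `tasks` is a list of (cost_share, capability) pairs:
    cost_share is the fraction of the role's human cost the task consumes
    (shares must sum to 1), capability is the task's 0-100 AI score.
    """
    total_share = sum(share for share, _ in tasks)
    if abs(total_share - 1.0) > 1e-9:
        raise ValueError("cost shares must sum to 1")
    return sum(share * capability for share, capability in tasks)

# The product-manager example from above:
pm_tasks = [(0.4, 95), (0.2, 30), (0.4, 25)]
print(role_exposure(pm_tasks))  # → 54.0
```

Note the weighting: the 95-capability task dominates the score because it consumes 40% of the role's cost, which is exactly why exposure-by-cost differs from a naive average of task scores.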

Tasks that held up in 2025-2026

  • Negotiation, especially with informed counterparts.
  • Novel research that requires forming hypotheses, not summarizing existing ones.
  • Physical tasks (construction trades, nursing, skilled labor).
  • Stakeholder and relationship management.
  • Tasks requiring accountable ownership under regulation (surgery, audit sign-off, legal counsel of record).

Tasks AI now does at or above average-human quality

  • First-draft status reports, memos, blog posts.
  • Summarizing meetings, documents, codebases.
  • Routine code review + generation.
  • Tier-1 support triage + routing.
  • Structured data entry from documents.
  • Translation between major language pairs.
  • Literature review + citation retrieval.

What to do with the score

  1. Identify 2-3 high-capability tasks you currently spend time on. Automate them — add hours back to your week.
  2. Identify the low-capability tasks that define your role's value. Invest in those skills.
  3. If role exposure is > 60%, plan a pivot inside the next 12 months — pick a role adjacent to yours with lower exposure and build the skill bridge.
  4. Re-score every 6 months. AI capability is a moving target.
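Between re-scores, you can roughly project where a task's score is headed using the 10-15 points per 18 months estimate from the FAQ. A linear sketch, assuming the midpoint of that range holds (it is an estimate, not a law):

```python
def project_capability(score: float, months: float,
                       pts_per_18_months: float = 12.5) -> float:
    """Linearly project a task's AI capability score forward.

    12.5 pts/18 months is the midpoint of the 10-15 point estimate;
    capability is capped at 100. An assumption for planning, not a forecast.
    """
    return min(100.0, score + pts_per_18_months * months / 18.0)

print(project_capability(70, 18))  # → 82.5: crosses from "assistant" territory
```

A task at 70 today that projects past 90 within your planning horizon should be treated as exposed now, even if the current score looks comfortable.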