Is your job safe from AI?
Enter your job title to see how automation and AI are impacting your role based on real task-level data.
Check your AI risk score
Why AI risk benchmarking matters
AI is reshaping work task by task, not in a single overnight wave. Some roles already show high observed exposure to AI tools, while others remain largely untouched.
We benchmark each occupation on two dimensions: what AI is actually doing in the wild today (Observed Exposure) and what current-generation LLMs could theoretically do if fully deployed (Theoretical Coverage).
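The relationship between these two dimensions can be sketched in a few lines of code. This is an illustrative toy model only, with made-up field names and numbers, not JobMentis's actual scoring engine:

```python
from dataclasses import dataclass

@dataclass
class OccupationScore:
    """Hypothetical container for the two benchmark dimensions (0-1 scale)."""
    observed_exposure: float      # share of tasks AI tools already perform today
    theoretical_coverage: float   # share of tasks current LLMs could in principle handle

    @property
    def adoption_gap(self) -> float:
        """Room between what is technically possible and what is actually deployed."""
        return self.theoretical_coverage - self.observed_exposure

# Example with invented numbers: a role where adoption lags far behind capability
role = OccupationScore(observed_exposure=0.18, theoretical_coverage=0.55)
print(f"Adoption gap: {role.adoption_gap:.0%}")
```

The gap between the two numbers is what makes early movers' gains possible: a large `adoption_gap` means the role could change quickly once tools are actually rolled out.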
Observed exposure
Measures how much of your role AI tools are already performing in real-world workplaces today, based on task-level usage data rather than raw model capability.
Why it matters
Observed exposure shows where change is already underway in your field, so you can gauge how quickly your day-to-day work is shifting right now.
Theoretical coverage
Estimates how much of your role could, in principle, be handled by current large language models, regardless of whether organizations have adopted them yet.
Why it matters
Most jobs have far higher theoretical coverage than observed exposure, which is why early movers see outsized gains—and why some roles may change rapidly once adoption barriers fall.
Skills-based analysis
We break your role into O*NET task statements and core skills, then score each task for linguistic, logical, and creative automation potential using current LLM benchmarks.
Why it matters
This reveals which parts of your job are most exposed and which skills you should double down on or upskill into.
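The scoring step described above can be illustrated with a minimal sketch. The weights, task names, and threshold below are invented placeholders for demonstration, not the actual JobMentis model:

```python
def score_task(linguistic: float, logical: float, creative: float,
               weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Weighted automation-potential score for one O*NET task statement.

    Each dimension is a 0-1 rating of how well current LLM benchmarks
    cover that aspect of the task. Weights are illustrative only.
    """
    return weights[0] * linguistic + weights[1] * logical + weights[2] * creative

# Hypothetical task ratings: (linguistic, logical, creative)
tasks = {
    "Draft routine correspondence": (0.9, 0.6, 0.4),
    "Negotiate with clients in person": (0.3, 0.2, 0.2),
}

scores = {name: score_task(*dims) for name, dims in tasks.items()}
# Tasks scoring at or above an example threshold count as highly exposed
exposed = [name for name, s in scores.items() if s >= 0.5]
```

Running this toy version, the drafting task scores 0.71 and is flagged as exposed, while the in-person negotiation task scores 0.25 and is not: the same split between exposed tasks and safer, human-centric ones that the report surfaces for your role.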
Safe harbor identification
Our engine flags 'Safe Harbor' tasks—high-stakes decisions, in-person and physical work, sensitive judgment and ethics—that current AI struggles to replace end-to-end.
Why it matters
These tasks point to safer career directions and responsibilities you can proactively grow into.
Data sources & research backbone
Research reference
Our estimates of Theoretical Coverage draw on recent generative AI labor impact research (2023-2024) and LLM task-overlap benchmarks, focusing on how model capabilities map onto real O*NET tasks.
JobMentis combines this public data with proprietary modeling to produce occupation-level AI risk scores that are interpretable and actionable for individual workers.
Understanding AI transformation
Q. How can I use this report?
You should use these insights to identify your 'Safe Harbor' skills—the parts of your job AI cannot easily replicate. Focus your professional development on these areas and learn to orchestrate AI tools to handle the high-exposure parts of your workflow, effectively increasing your personal leverage.
Q. Does a high AI risk score mean I will lose my job?
Not necessarily. A high score indicates that a large percentage of your current tasks are technically automatable. In most cases, this leads to 'task redistribution'—where AI handles routine outputs while you pivot toward high-value oversight, strategy, and human-centric judgment. The goal is augmentation, not just substitution.
Q. Which jobs are most exposed to AI?
Generally, roles involving high volumes of linguistic processing, structured data analysis, and digital coordination (such as STEM, Management, and Legal) show higher exposure. Roles requiring physical dexterity, in-person emotional labor, or complex environmental navigation (like Construction, Healthcare, and Skilled Trades) currently remain more resilient.