The old frameworks for automation are broken
We've spent decades studying how automation displaces workers. The model that emerged, developed by Acemoglu and Restrepo, was elegant and predictive: break down each occupation into discrete tasks, assess which tasks AI could automate, and estimate how many workers would be affected. This worked for manufacturing. It worked for earlier waves of software automation. Factories could be retooled to handle new production methods. Industries could absorb the transition across decades.
But something fundamentally different is happening now, and our measurement tools can't see it. Agentic AI systems don't automate individual tasks. They orchestrate entire occupational workflows from beginning to end, making dozens of interconnected decisions along the way. A judge uses hundreds of different skills across their work. No single AI system needs to be superhuman at reading, reasoning, legal analysis, or writing individually. But string those capabilities together into an autonomous workflow that reads a case file, researches precedents, identifies key issues, drafts an opinion, and flags edge cases for review, all without stopping for human judgment at intermediate steps? Now you've potentially displaced an entire occupation. The old frameworks look at each skill separately and miss how dangerous the combination becomes.
The paper addresses this head-on. Rather than trying to patch existing models, the researchers built a new measurement system designed specifically for workflow-level disruption. They introduce the Agentic Task Exposure (ATE) score, a composite measure that captures something the task-based framework structurally cannot: the risk that a single AI system executing a coherent occupational workflow will eliminate the need for human workers in that job entirely.
Why workflows matter differently from tasks
The conceptual shift here is subtle but crucial. When a specific task becomes automatable, a skilled human often remains necessary to orchestrate the work, apply judgment, and handle edge cases. But when an agentic system can chain tasks together autonomously, it removes the need for that orchestrator entirely.
Consider a credit analyst. The old framework would analyze whether AI can do financial modeling (yes), evaluate credit risk (mostly), and write recommendations (increasingly). It would likely conclude that analysts are partially displaced but remain valuable for oversight. But if one system can ingest applications, run all the models, evaluate comparables, identify red flags, and draft approval or rejection with reasoning, the analyst becomes redundant rather than complementary.
The ATE score captures this by combining three components. First, an AI capability score measures what proportion of tasks within an occupation current AI systems can execute competently, drawn from O*NET task data. Second, a workflow coverage factor assesses whether the automatable tasks form a coherent, orchestrable sequence. This is where the framework diverges from predecessors. An occupation where automatable tasks are scattered and deeply interdependent scores lower than one where they're sequential and modular. Third, a logistic adoption velocity parameter models how quickly organizations will actually deploy agentic systems, acknowledging that technical capability and economic deployment operate on different timelines.
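The three components compose multiplicatively into a single score. As a minimal sketch of how such a composite might be computed, here is one possible implementation; the function names, weights, and parameter values (midpoint 2027.5, steepness 1.2, the credit-analyst inputs) are illustrative assumptions, not the paper's actual parameterization:

```python
import math
from dataclasses import dataclass


@dataclass
class Occupation:
    name: str
    task_capability: float    # fraction of O*NET tasks current AI can execute (0-1)
    workflow_coverage: float  # how coherent/orchestrable the automatable tasks are (0-1)


def adoption_fraction(year: float, midpoint: float = 2027.5, steepness: float = 1.2) -> float:
    """Logistic adoption velocity: share of organizations deploying by `year`.

    midpoint and steepness are illustrative guesses, not fitted values.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))


def ate_score(occ: Occupation, year: float) -> float:
    """Composite Agentic Task Exposure: capability x workflow coverage x adoption."""
    return occ.task_capability * occ.workflow_coverage * adoption_fraction(year)


analyst = Occupation("Credit analyst", task_capability=0.85, workflow_coverage=0.9)
print(f"ATE in 2030: {ate_score(analyst, 2030):.3f}")
```

The multiplicative form encodes the paper's key intuition: high task capability alone is not enough; if the automatable tasks don't form an orchestrable sequence, or organizations haven't deployed yet, exposure stays low.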
The researchers synthesized these into a single mechanistic model rather than a regression. This choice matters philosophically. They're not saying "historical patterns suggest," but rather "here's how agentic systems will actually function as they execute work." They calibrated the framework by identifying occupations where displacement risk seems obvious (credit analysts: extremely high) and reverse-engineered the parameters, then applied them systematically across all occupations. This makes the model transparent and auditable in ways black-box prediction struggles to achieve.
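The reverse-engineering step can be illustrated with simple algebra: fix a clear-cut anchor case to a target score, then solve for the unknown parameter. Everything below is a hypothetical sketch; the target value, the capability estimate, and the adoption parameters are assumptions for illustration, not the paper's calibration data:

```python
import math


def adoption_fraction(year: float, midpoint: float = 2027.5, steepness: float = 1.2) -> float:
    """Illustrative logistic adoption curve (parameters assumed, not fitted)."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))


# Calibration by anchor case: assume a clear-cut occupation (credit analyst)
# should land near a chosen target ATE in 2030, given an assumed capability
# score. Solve the multiplicative model for the workflow-coverage parameter,
# then that scale can be applied systematically across other occupations.
target_ate = 0.45      # assumed target for the anchor occupation
capability = 0.85      # assumed fraction of tasks AI can execute
year = 2030

coverage = target_ate / (capability * adoption_fraction(year))
print(f"Implied workflow coverage: {coverage:.3f}")
```

Because every parameter is an explicit number rather than a regression coefficient, a skeptic can rerun the calibration with a different anchor or target and see exactly how the downstream scores move.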
Disruption isn't evenly distributed
The geography of vulnerability emerges sharply from the analysis. The researchers focused on five major US technology regions: Seattle-Tacoma, San Francisco Bay Area, Austin, New York, and Boston. These aren't the only places where agentic AI will matter, but they're the densest concentrations of occupations matching the profile for displacement.
The findings are striking: 93.2% of the 236 analyzed occupations across six information-intensive SOC categories (financial, legal, healthcare, healthcare support, sales, administrative and clerical roles) cross the moderate-risk threshold (ATE score of 0.35 or higher) in Tier 1 regions by 2030. In plain terms: in San Francisco by 2030, nearly every job in these categories faces significant disruption probability.
The most vulnerable occupations cluster around document-in-document-out workflows with heavy reasoning components. Credit analysts, judges, and sustainability specialists all reach ATE scores of 0.43 to 0.47, indicating very high displacement risk. Judges particularly stand out because conventional analysis might assume the role requires too much judgment and contextual understanding for automation. But the workflow is surprisingly orchestrable: intake the case details, research precedent, identify legal principles, draft an opinion with reasoning, flag potential appeals or constitutional issues. An agentic system could execute this sequence end-to-end.
The paper connects to broader work on territorial impacts of AI displacement, showing that disruption isn't just sectoral but geographic. A credit analyst in San Francisco faces fundamentally different labor market conditions than a credit analyst in rural Mississippi, not because of different skills but because of different regional economic structure. Tech hubs aren't just "ahead" on this timeline, they're in a different labor market category entirely.
This geographic concentration matters for policy. It means disruption will be severe and visible in specific places, creating acute pressure for response, while other regions might see relatively little change. It means migration patterns will shift, potentially stranding workers and infrastructure in regions that lose employment concentration.
The jobs that might actually emerge
But the story isn't one-directional. The research identifies seventeen emerging occupational categories that might expand as agentic AI rolls out, concentrated in three areas: human-AI collaboration roles, AI governance and auditing, and domain-specific AI operations. These don't yet appear in the standard occupational taxonomy, but they're nascent roles that become necessary precisely because agentic systems need to be managed, governed, and specialized.
This is where the concept of reinstatement effects matters. When the automobile displaced horse-drawn transport, blacksmith employment fell while mechanic employment exploded. The research isn't claiming overall job gains, but rather that some occupations expand even as others shrink. If agentic AI systems draft legal documents, you need fewer junior attorneys doing document review. But you need more senior attorneys specializing in reviewing AI-generated work, ensuring compliance, and managing AI system behavior. The job mix shifts drastically even if total employment doesn't.
The emerging roles include positions like AI Legal Compliance Specialist, Autonomous System Supervisor, and AI Training Data Manager. These are different skills from the displaced positions. You can't simply retrain a credit analyst to become an AI Supervision Specialist through a few weeks of coursework. The implication is uncomfortable: some occupations might benefit from retraining pipelines, others require workers to change careers entirely. Geographic mismatch compounds the problem. The emerging occupations might concentrate in tech hubs while displaced workers sit in cities that can't absorb them into new roles.
The identification of these emerging roles has a second implication: the research isn't dismissing labor disruption with "there will be new jobs." It's arguing that labor market adjustment is more complex than simple replacement and requires intentional policy response.
Why the 2025-2030 timeline matters
All the findings in this research depend on adoption speed. "AI will eventually displace workers" is a weak claim. "AI will displace 40% of a region's professional workforce within five years" is a different kind of prediction entirely, with implications for workforce planning, policy, and individual career decisions.
The paper models adoption through a logistic curve: slow initially as early adopters test systems and work out integration problems, exponential in the middle as competitive pressure forces deployment, plateauing as most organizations have adopted. The question is where agentic AI sits on that curve. The authors assume we're in early stages as of mid-2024, which puts the exponential growth phase in the 2025-2030 window.
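The three phases of that S-curve can be made concrete by tabulating an illustrative logistic function across the window. The midpoint (2027.5) and growth rate (1.2) below are assumptions chosen to place mid-2024 in the early phase, not the paper's fitted values:

```python
import math


def logistic_adoption(year: float, midpoint: float = 2027.5, rate: float = 1.2) -> float:
    """Share of organizations that have deployed agentic systems by `year`.

    Illustrative parameters: midpoint and rate are assumptions, not fitted values.
    """
    return 1.0 / (1.0 + math.exp(-rate * (year - midpoint)))


# Tabulate the curve: slow start, exponential middle, plateau at the end.
for year in range(2024, 2033):
    share = logistic_adoption(year)
    bar = "#" * int(share * 40)
    print(f"{year}: {share:6.1%} {bar}")
```

With these assumed parameters, adoption sits under 5% in 2024-2025, crosses 50% around the midpoint, and exceeds 95% by 2030, which is exactly the compression the paper worries about: the steep middle of the curve falls inside the 2025-2030 window.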
Several factors could accelerate adoption. Extreme financial pressure to cut labor costs reduces uncertainty and pushes earlier deployment. If regulatory bodies greenlight agentic AI in financial or legal domains, deployment accelerates dramatically. Competitive pressure is powerful: once one major law firm uses agentic systems at scale, others feel forced to follow or lose competitive advantage. Improvements in how agentic systems integrate with existing human workflows lower friction.
Other factors could slow adoption. Risk aversion in heavily regulated industries (judges, financial analysts) might lead to cautious, gradual deployment despite technical capability. Litigation and liability concerns could create hesitation. If an AI-generated legal brief causes damages in litigation, organizations become gun-shy about deployment. Labor movement pushback could delay rollout. Technical limitations might prove more severe than expected, slowing adoption below the logistic curve's projection.
The specific timeline should be debated. But the underlying mechanics, the paper suggests, are probably correct: agentic systems can execute occupational workflows, and once that's clearly demonstrated, competitive and financial incentives will push deployment relatively quickly.
What changes in policy, cities, and individual planning
The entire point of research like this is enabling better decisions. If displacement is concentrated geographically and on a faster timeline than previous automation waves, that changes what makes sense to do.
For regional economic planners in places like Austin, San Francisco, or Boston, the research makes visible where they need to invest. Not just retraining, though that's part of it, but real economic diversification. If 93% of financial and legal occupations in a region face high displacement risk within five years, that's a stark signal that the regional economy needs to develop alternative employment paths. The seventeen emerging occupational categories are seeds for that development. Regions could invest in education, infrastructure, and hiring incentives for AI governance roles before displacement hits.
For workforce transition policy, the timing creates urgency. Previous automation happened over 20 to 40 years, which was brutal for affected workers but allowed gradual reallocation. This research suggests disruption might compress into 5 to 15 years in concentrated regions. That difference transforms what policy responses are adequate. Wage insurance, accelerated retraining, relocation support, and income floor protections shift from nice-to-have to necessary for preventing social rupture. The workers most affected—credit analysts, paralegals, administrative staff—often have fewer resources to retrain themselves.
For workers in high-ATE occupations, the research argues for starting transition planning now rather than waiting. Not panicking, but beginning to understand whether retraining into emerging roles is possible, whether migration might be necessary, or whether a different career trajectory makes sense.
For technologists and AI governance advocates, the research makes clear that agentic AI capabilities exist or exist imminently. It shifts the question from "will agentic AI displace workers?" to "at what pace and under what constraints?" That implies a governance agenda: if displacement is likely anyway, what policies and guardrails minimize disruption and maximize opportunities for workers transitioning into new roles?
The research doesn't resolve the values questions. Whether agentic AI should be deployed to automate these occupations is a choice society makes. Whether the transition can be managed smoothly depends on policy choices not yet made. What the research does is make the costs and timing visible, which is necessary though not sufficient for good policy.
How the measurement choice shapes the findings
The decision to build a mechanistic model rather than a statistical regression shapes what the research can say. A regression would fit historical data about which jobs automation actually displaced and project forward. That approach has the advantage of empirical grounding but the disadvantage of being trained on a fundamentally different kind of automation (task-level, physical, deployed gradually across decades). The historical precedent provides weak guidance for agentic systems.
Instead, the researchers asked: how do agentic systems actually function? What does it mean for a workflow to be orchestrable by a single AI agent? How quickly will organizations adopt? They encoded those mechanistic questions into parameter settings, calibrated against judgment about clear-cut cases, and applied the framework systematically.
This approach is more philosophically honest about the limits of prediction. It doesn't pretend historical data can fully guide us on something genuinely new. It makes its assumptions explicit and subject to scrutiny rather than hidden in regression coefficients. It's also more actionable: if you disagree with the adoption velocity assumption, or the workflow coverage assessment for a particular occupation, you can see exactly where and modify it. Black-box models don't allow that transparency.
The work also connects to recent research on agentic manipulation in employment, which examines how autonomous systems might reshape working conditions even before large-scale displacement. And it extends earlier frameworks on job exposure assessment, showing how measurement methodology influences what risks become visible.
What remains uncertain
The paper is careful about what it claims and what it leaves open. It predicts displacement risk based on technical capability and workflow analyzability. It doesn't predict whether organizations will actually deploy agentic systems, only that financial incentives and competitive pressure are likely to push adoption. It doesn't resolve what policy responses are adequate, only that current policy frameworks seem designed for a different kind of disruption.
The strongest claim the research makes is structural: agentic AI can execute entire occupational workflows, which means displacement risk is substantially larger than task-level analysis suggests, and that risk is concentrated geographically and temporally. The timeline, the specific occupations most affected, the pace of adoption, and the adequacy of transition support are all more uncertain.
But that structural claim alone changes how the problem should be framed. If agentic AI genuinely does enable workflow-level automation, then the labor market disruption ahead isn't a tail risk or an academic curiosity. It's something major cities should be planning for now. It's something workers should factor into career decisions. It's something policymakers need to address with more urgency than current frameworks suggest. The ATE framework makes that risk visible. What society does with the visibility is the next question.
This is a Plain English Papers summary of a research paper called Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis of Emerging Labor Market Disruption. If you like these kinds of analyses, join AIModels.fyi or follow us on Twitter.
