
AI: A Tool for Human Augmentation — Not End-to-End Automation

  • Writer: Oscar Gonzalez
  • Nov 12
  • 4 min read

In a recent study from the Stanford University SALT Lab (Shao et al., 2025) titled “Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce”, the authors offer a thoughtful, data-driven assessment of how AI agents map to real human work. In this post, I summarize the paper's key findings and situate them within my own AI philosophy and approach at Accéder, where I firmly believe that intelligent machines should augment human professionals rather than supplant them.


Key findings from the Stanford study


Here are the most relevant insights:


  • The researchers surveyed 1,500 workers across 104 occupations, identifying 844 distinct tasks, and asked how much human agency workers preferred when AI agents are used. Simultaneously, they gathered assessments from AI experts on how capable current AI systems are at performing those tasks.


  • They introduced a Human Agency Scale (HAS), a five-level spectrum running from full AI autonomy to essential human involvement, to quantify how much human agency workers prefer, rather than forcing a binary automate vs. not-automate decision.


  • They mapped tasks into a 2×2 framework that crosses worker desire for AI assistance with current AI capability, yielding four “zones” (a minimal classification sketch follows this list):

    • Green Light (high desire / high capability) — prime candidates for AI assistance

    • Red Light (low desire / high capability) — technically feasible, but human workers resist it

    • R&D Opportunity (high desire / low capability) — workers want it, but the tech isn’t ready

    • Low Priority (low desire / low capability) — neither wanted nor feasible.


  • A dominant preference among workers was for a collaborative human-AI partnership (HAS middle levels) rather than full automation.


  • They found a significant misalignment: many tasks that are technically feasible for AI are not tasks workers want automated; conversely, many tasks workers do want help with are not yet possible.


  • The study also points to a shift in which human skills gain value: tasks tied to interpersonal skills, judgment, coaching, and coordination will become more critical, while purely information-processing or routine analytic skills may decline in relative value.
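To make the 2×2 framework concrete, here is a minimal Python sketch that sorts tasks into the four zones. The 1–5 rating scale, the threshold of 3, and the example tasks are illustrative assumptions of mine, not values from the paper.

```python
from dataclasses import dataclass

# Illustrative threshold on an assumed 1-5 rating scale; the paper's
# actual scoring methodology is more nuanced.
THRESHOLD = 3.0

@dataclass
class Task:
    name: str
    worker_desire: float   # how much workers want AI help here (1-5)
    ai_capability: float   # expert-rated current AI capability (1-5)

def classify(task: Task) -> str:
    """Place a task in the study's desire-vs-capability 2x2 framework."""
    wants_ai = task.worker_desire >= THRESHOLD
    ai_can = task.ai_capability >= THRESHOLD
    if wants_ai and ai_can:
        return "Green Light"      # prime candidate for AI assistance
    if ai_can:
        return "Red Light"        # feasible, but workers resist it
    if wants_ai:
        return "R&D Opportunity"  # wanted, but the tech isn't ready
    return "Low Priority"         # neither wanted nor feasible

# Hypothetical tasks, for illustration only.
for t in [Task("invoice data entry", 4.5, 4.2),
          Task("performance coaching", 1.8, 3.6),
          Task("multilingual contract review", 4.1, 2.3)]:
    print(f"{t.name}: {classify(t)}")
```

The “use-case audit” described later in this post is essentially this check run before a project begins: score desire, score capability, and only proceed when both clear the bar.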


How this aligns with — and reinforces — the Accéder / TITAN AI philosophy


At Accéder, with our flagship platform TITAN, our guiding principle is that AI should serve as an intelligent assistant to human workers—not as a fully autonomous decision-maker. The Stanford findings validate and sharpen this approach in several ways:


  1. Human in the Loop Matters. The Stanford HAS shows that workers prefer to retain oversight, judgment and control. This underscores the business importance of designing AI solutions that keep humans accountable and engaged. For TITAN, we build architectures that emphasize human-agent collaboration rather than full automation.


  2. Augmentation over Replacement. The “Green Light” zone in the Stanford study corresponds precisely to the kind of use cases we target: structured, repetitive, high-volume tasks (for example, in supply chain, finance, customer care) that free humans to focus on higher-value-added judgment, strategic interactions, and creativity. By contrast, pushing for full automation in tasks humans value (which aligns with the “Red Light” zone) risks resistance and lower adoption.


  3. Prioritize Where Workers Want It and Tech Can Deliver. The misalignment identified by Stanford is a red flag for many AI projects: if you automate tasks that workers don’t want automated, you may see low uptake, distrust, or unintended consequences. At Accéder, we emphasize a “use-case audit” in which we evaluate: (a) Does the worker want assistance? (b) Can the AI deliver reliably? This mirrors the study’s framework.


  4. Shifting Skills and Value Creation. The study’s insight that interpersonal and judgment skills will become more critical dovetails with our human-centric deployment strategy. TITAN isn’t built to eliminate human roles, but to elevate them: we free operations professionals from data entry and basic forecasting so they can focus on scenario-building, strategic supply chain decisions, and change management — where human expertise remains dominant.


  5. Ethics, Trust, and Adoption. Stanford’s finding that trust and human agency matter reminds us that deploying AI is not just a technical problem but an organizational change challenge. Users must feel empowered, not overridden. Hence, at Accéder, we embed transparency, human-agent hand-off protocols, audit logs, and human-override options in TITAN (a minimal, hypothetical sketch of such a hand-off follows this list).
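To make point 5 concrete, here is a minimal, hypothetical sketch of what a human-agent hand-off with an audit log and a human override can look like. The function names, the log format, and the reorder example are my own illustration, not TITAN’s actual implementation.

```python
import json
import time
from typing import Callable

def handle_with_oversight(proposal: dict,
                          human_review: Callable[[dict], dict],
                          audit_path: str = "audit.log") -> dict:
    """Route an agent's proposed action through a human reviewer.

    The reviewer returns the proposal unchanged (approval) or a
    modified version (override). Every decision is appended to an
    audit log so oversight actions stay traceable.
    """
    decision = human_review(proposal)
    record = {
        "ts": time.time(),
        "proposal": proposal,
        "decision": decision,
        "overridden": decision != proposal,
    }
    with open(audit_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return decision

# Hypothetical reviewer: hold any large reorder for manual review.
def reviewer(p: dict) -> dict:
    if p.get("action") == "reorder" and p.get("qty", 0) > 1000:
        return {**p, "action": "hold_for_review"}
    return p

final = handle_with_oversight(
    {"action": "reorder", "sku": "A-17", "qty": 2500}, reviewer)
print(final)  # {'action': 'hold_for_review', 'sku': 'A-17', 'qty': 2500}
```

The design choice worth noting: the agent never commits an action directly. The human reviewer is the single point of commitment, and the log preserves every override for later audit.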



Practical implications for business executives

For business execs, especially in manufacturing, consumer goods, supply chain, and finance, here are actionable takeaways based on both the Stanford research and our approach at Accéder:


  • Start with tasks in the Green Light zone: Identify tasks where workers welcome AI assistance and where the technology can deliver. These are early “wins” that build trust and momentum.


  • Avoid full automation of high-judgment tasks: Resist the temptation of end-to-end black-box AI for core strategic decisions where humans expect to remain in control. These may fall into the Red Light zone.


  • Partnership design: Frame the AI roll-out as a human-AI team. Define which steps the human leads and which the agent supports. Use the HAS internally to calibrate human involvement.


  • Invest in change management and upskilling: As value shifts toward interpersonal, judgment, and coordination skills, companies should train workers to use AI-augmented tools effectively, interpret insights, make decisions, and manage exceptions.


  • Monitor adoption, human-agent hand-offs, and agency: Track metrics not only of technical performance (accuracy, speed) but also of human satisfaction, trust, oversight actions, and error handling. A high-performing system that leaves humans disengaged remains brittle (a minimal scorecard sketch follows this list).


  • Focus on strategic value rather than hype: The research warns against over-investing in tasks where AI capability outpaces worker desire or where workers don’t want the automation. That misalignment risks cost, downtime, low adoption and reputational harm.
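As a sketch of the monitoring point above, here is a minimal scorecard that tracks technical and human-agency metrics side by side. The metric names, thresholds, and warning heuristics are illustrative assumptions, not measures from the Stanford study.

```python
from dataclasses import dataclass

@dataclass
class DeploymentScorecard:
    # Technical performance
    accuracy: float        # task success rate, 0-1
    avg_latency_s: float   # mean response time in seconds
    # Human-agency signals (illustrative names)
    satisfaction: float    # worker survey score, 0-1
    override_rate: float   # share of agent proposals humans change
    review_rate: float     # share of outputs a human actually inspects

    def warnings(self) -> list[str]:
        """Flag patterns where strong technical numbers mask disengagement."""
        w = []
        if self.accuracy >= 0.9 and self.review_rate < 0.2:
            w.append("high accuracy, low review: humans may be rubber-stamping")
        if self.override_rate > 0.4:
            w.append("frequent overrides: agent misaligned with workers")
        if self.satisfaction < 0.5:
            w.append("low satisfaction: adoption at risk despite performance")
        return w

card = DeploymentScorecard(accuracy=0.93, avg_latency_s=1.2,
                           satisfaction=0.44, override_rate=0.10,
                           review_rate=0.12)
for msg in card.warnings():
    print("WARN:", msg)
```

The particular thresholds matter less than the pairing: a deployment that looks excellent on accuracy alone can still be flagged early when the human-agency columns deteriorate.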



Final reflections


In the rush toward “transformational AI”, the Stanford SALT Lab study serves as a timely reminder: the future of work is not about machines replacing people — it’s about machines amplifying people. By staying anchored in augmentation, preserving human agency, and aligning AI deployment with worker preferences and real technical capability, we build solutions that are sustainable, trusted, and high-impact.


At Accéder, with TITAN and our multilingual, domain-adaptive LLM agenda, our mission is clear: deliver enterprise-grade AI that empowers human professionals, enabling faster, smarter, data-rich decisions, rather than replacing them. We design for human-in-the-loop workflows, for collaboration, for oversight. Because in our view, the ultimate value of AI lies not in replacing human decision-making, but in enhancing human capability, freeing humans to act where they are best: strategic, creative, empathetic, decisive.


Thank you for reading. I look forward to your thoughts, questions and reflections on how your organization can embrace AI as augmentation, not automation.


References:

Shao, Y., Zope, H., Jiang, Y., Pei, J., Nguyen, D., Brynjolfsson, E., & Yang, D. (2025). Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce.


