New research from Stanford University introduces a framework for evaluating where AI should automate work and where it should augment human effort.

Redefining work execution with AI agents
AI agents are reshaping how jobs are performed by giving workers tools to carry out complex, goal-directed tasks. Unlike static algorithms, these agents combine multi-step planning with software tool use to handle entire workflows across fields such as education, law, finance, and logistics. Their integration is no longer theoretical: workers are already applying them to a variety of professional responsibilities. The result is a transitional labor environment in which the division of labor between humans and machines is being redefined every day.
Bridging the gap between AI capabilities and worker preferences
A persistent problem in this transition is the disconnect between what AI agents can do and what workers want them to do. Even when an AI system is technically able to take over a task, workers may be unwilling to support the shift because of concerns about job satisfaction, task complexity, or the importance of human judgment. Meanwhile, tasks that workers are eager to offload may lack mature AI solutions. This mismatch poses a major obstacle to the responsible and effective deployment of AI in the workforce.
Beyond Software Engineers: A Workforce-Wide Assessment
Until recently, assessments of AI adoption have often centered on a few roles, such as software engineering or customer service, limiting our understanding of how AI affects the broader range of occupations. Most of these approaches also prioritize company productivity over worker experience, and they rely on analyses of current usage patterns that offer no forward-looking view. As a result, AI tool development has lacked a comprehensive foundation grounded in the actual preferences and needs of the people who perform the work.
Stanford University’s Survey-Driven WorkBank Database: Capturing Real Worker Voices
The Stanford University research team introduced a survey-based auditing framework that evaluates which tasks workers want to see automated or augmented and compares those preferences with expert assessments of AI capabilities. Using task data from the U.S. Department of Labor’s O*NET database, the researchers created WorkBank, a dataset built from the responses of 1,500 domain workers and the evaluations of 52 AI experts. The team used audio-enhanced mini-interviews to capture nuanced preferences. The framework introduces the Human Agency Scale (HAS), a five-level metric that captures the desired degree of human involvement in task completion.
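As a rough illustration of what a WorkBank-style record might contain, here is a minimal Python sketch; the field names and example values are assumptions inferred from the article’s description, not the dataset’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class WorkBankRecord:
    """One task entry, per the article's description (field names are hypothetical)."""
    occupation: str                   # O*NET occupation title
    task: str                         # O*NET task statement
    worker_automation_desire: float   # aggregated worker desire for automation
    worker_preferred_has: int         # workers' preferred Human Agency Scale level (1-5)
    expert_capability: float          # AI experts' assessment of current AI capability
    expert_has: int                   # experts' assessed feasible HAS level (1-5)

# Illustrative entry (values invented for the example):
example = WorkBankRecord(
    occupation="Paralegal",
    task="Prepare routine case summaries",
    worker_automation_desire=0.7,
    worker_preferred_has=2,
    expert_capability=0.6,
    expert_has=2,
)
```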

Human Agency Scale (HAS): Measuring the appropriate level of AI involvement
At the center of this framework is the Human Agency Scale, which ranges from H1 (full AI control) to H5 (full human control). This approach recognizes that not all tasks benefit from full automation, and that full automation should not be the goal of every AI tool. For example, tasks rated H1 or H2 (such as transcribing data or generating routine reports) are well suited to independent AI execution, while tasks such as planning training programs or participating in safety-related discussions often score H4 or H5, reflecting a strong demand for human oversight. The researchers gathered both perspectives: workers rated their desired level of automation and their preferred HAS level for each task, while experts assessed AI’s current capability on the same task.
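To make the scale concrete, here is a minimal Python sketch of the five HAS levels; the endpoint definitions follow the article, but the names and comments for the intermediate levels are paraphrases, not the paper’s wording.

```python
from enum import IntEnum

class HumanAgencyScale(IntEnum):
    """Five-level Human Agency Scale (HAS) as described in the article.
    Intermediate level descriptions below are assumptions, not quotes."""
    H1_FULL_AI_CONTROL = 1     # AI completes the task independently
    H2_AI_LED = 2              # AI leads with minimal human input
    H3_EQUAL_PARTNERSHIP = 3   # human and AI collaborate as peers
    H4_HUMAN_LED = 4           # human leads; AI assists under oversight
    H5_FULL_HUMAN_CONTROL = 5  # human is essential to complete the task

# Tasks cited in the article map onto the scale's endpoints:
transcribing_data = HumanAgencyScale.H1_FULL_AI_CONTROL
safety_discussion = HumanAgencyScale.H5_FULL_HUMAN_CONTROL
assert transcribing_data < safety_discussion  # IntEnum supports ordering
```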
WorkBank insights: Where workers embrace or resist AI
The WorkBank database reveals clear patterns. About 46.1% of tasks drew a high desire for automation from workers, especially tasks seen as low-value or repetitive. Conversely, significant resistance appeared in tasks involving creativity or interpersonal dynamics, regardless of AI’s technical capabilities. By overlaying worker preferences on expert capability ratings, the tasks divide into four zones: the automation “Green Light” zone (high capability, high desire), the automation “Red Light” zone (high capability, low desire), the R&D Opportunity zone (low capability, high desire), and the Low Priority zone (low capability, low desire). Notably, 41% of the tasks associated with companies funded by Y Combinator fall in the Low Priority or Red Light zones, indicating a potential misalignment between startup investment and worker demand.
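The two-dimensional mapping of worker desire against expert-rated capability can be illustrated with a short, hypothetical Python function; the 0.5 cutoff and normalized score ranges are illustrative assumptions, not values from the paper.

```python
def classify_task(desire: float, capability: float, threshold: float = 0.5) -> str:
    """Map a task into one of the four WorkBank zones.

    desire:     workers' desire for automation, normalized to [0, 1]
    capability: expert-rated AI capability, normalized to [0, 1]
    The 0.5 threshold is an illustrative assumption, not the paper's.
    """
    high_desire = desire >= threshold
    high_capability = capability >= threshold
    if high_capability and high_desire:
        return "Green Light (automate now)"
    if high_capability and not high_desire:
        return "Red Light (capable, but workers resist)"
    if not high_capability and high_desire:
        return "R&D Opportunity (wanted, not yet feasible)"
    return "Low Priority"

print(classify_task(desire=0.9, capability=0.8))  # -> Green Light
print(classify_task(desire=0.2, capability=0.9))  # -> Red Light
```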
Responsible AI deployment in the workforce
This study offers a clearer understanding of how to integrate AI more responsibly. The Stanford framework reveals not only where automation is technically feasible but also where workers would accept it. Because the task-level framework goes beyond technological readiness to incorporate human values, it is an invaluable tool for AI development, labor policy, and workforce training strategies.
TL;DR:
This article introduces WorkBank, a large-scale dataset that combines worker preferences across 844 tasks spanning 104 occupations with AI expert evaluations to assess where AI agents should automate or augment work. The study uses the novel Human Agency Scale (HAS) to reveal complex automation patterns and highlights the misalignment between technical capabilities and workers’ desires. The findings show that workers welcome automation of repetitive tasks but resist it in roles requiring creativity or interpersonal skills. The framework provides actionable insights for responsible AI deployment aligned with human values.
Check out the Paper. All credit for this research goes to the researchers of this project.

Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields such as biomaterials and biomedical science. With a strong background in materials science, he is exploring new advancements and creating opportunities to contribute.
