A people-first vision for the future of work in the age of AI | Brookings
What does a people-first future of work in the age of AI look like?
The authors argue that AI does not have to mean mass layoffs and worse jobs. Instead, they outline a people-first vision that uses the gains from AI to invest back into society and human-centered work, rather than just into corporate profits.
Today, many workers experience what the authors call the “enshittification” of work: stagnant wages, increased monitoring and surveillance, algorithmic scheduling, and declining autonomy. This is happening across sectors—from teachers and nurses to software engineers—and it started long before modern AI. AI risks accelerating these trends, but it is not the root cause.
A people-first future of work would:
- Protect and expand human roles in relationship-based jobs like teaching, health care, and social work, rather than replacing them with AI systems or robots.
- Improve job quality through policies such as minimum staffing levels, better pay, and restored professional autonomy.
- Use AI as a tool that workers control—for example, to handle administrative tasks like notetaking—so they can spend more time on direct human care and interaction.
- Rebuild institutions (like a revitalized National Labor Relations Board) so workers have real bargaining power and protections that are enforced in practice, not just on paper.
In this vision, AI helps reimagine how we structure work: smaller K–12 class sizes, unhurried nurses, dignified elder care, and accessible mental health support—delivered by well-trained, fairly paid people, funded in part by the productivity gains of the AI economy.
How can policy protect human-centered jobs like teaching and care work from being hollowed out by AI?
The authors emphasize that some jobs should remain fundamentally human—especially those that build relationships and support social well-being, like teaching and care work. They propose several policy directions to protect and strengthen these roles as AI spreads:
- Set and enforce minimum staffing levels
Care professions often operate under strain: overcrowded classrooms, hospital closures, and oversized caseloads. The authors argue for laws that mandate minimum staffing requirements in schools, hospitals, and other care settings, similar to existing rules in:
- Air traffic control and nuclear power plants, where minimum staffing is already required.
- Child care and K–3 classrooms in many states, which already cap the number of students per teacher.
Extending these kinds of requirements would both improve service quality and create more stable, meaningful jobs.
- Invest in more teachers and smaller class sizes
Low student-to-teacher ratios are a key driver of student success and a major differentiator for many private schools. Yet public schools remain chronically underfunded, with too many students per teacher. AI did not create this problem, but it can make it worse.
The authors highlight research showing that students in AI-enabled classrooms felt less connected to their teachers and peers. Given how important teacher–student relationships are for outcomes, they argue that AI should support teachers, not replace them. That means:
- Funding to train and hire more teachers.
- Legal requirements that keep class sizes manageable.
- Use AI to reduce administrative burden, not human contact
In health care, for example, some systems already use AI to assist with notetaking, which can free nurses to spend more time with patients. The authors see this as the right direction: AI should be deployed in ways that workers control and that clearly benefit both workers and the people they serve.
- Strengthen enforcement and worker voice
Policies on paper are not enough. The authors call for:
- A revitalized National Labor Relations Board (NLRB) that can effectively support workers in negotiating terms and conditions of employment.
- Potential amendments to the Fair Labor Standards Act to include minimum staffing levels in some industries, enforced by the Department of Labor’s Wage and Hour Division—possibly with AI used as a force multiplier for oversight.
Overall, the policy message is clear: rather than using AI to cut human roles in care and education, we should use it to rethink how these professions are staffed, trained, and supported, so that people—not algorithms—remain at the center of critical services.
How can workers adapt to AI-driven change without being left behind?
The authors argue that addressing AI-driven labor disruption requires more than generic “reskilling” programs: it calls for new and revitalized institutions, better training pathways, and a real voice for workers in how AI is designed and deployed.
1. Build serious mid-career training and transition pathways
The U.S. has historically focused on training either before employment (in schools) or on the job. Neither model works well for mid-career workers whose roles are being reshaped by AI.
The authors suggest:
- Creating post-employment training institutions that help workers transition into new fields.
- Using public funding and targeted programs to move workers into high-need, people-first roles. For example, many software engineers threatened by AI already have the math and science background needed for teaching, where there are persistent shortages.
But training alone is not enough. Engineers often chose tech over teaching because of income, prestige, and autonomy. To make these transitions viable, policymakers need to:
- Raise salaries in people-first professions.
- Guarantee adequate staffing levels so workloads are sustainable.
- Restore professional autonomy, so teachers and care workers are trusted as experts, not micromanaged by algorithms.
2. Reinvigorate worker representation and lifelong learning
The authors point to European models where powerful trade unions play a central role in lifelong learning. By aggregating worker interests and negotiating with employers, unions help keep workers secure and economies competitive.
They recommend:
- Rebuilding regulatory and collective bargaining structures in the U.S. so workers can shape training and deployment of AI.
- Using AI as a tool to help understaffed enforcement agencies (like the NLRB or Department of Labor) monitor compliance and protect workers’ rights.
3. Create tripartite institutions to co-design AI
The authors advocate for tripartite institutions that bring together government, business, and labor unions to:
- Identify jobs that need minimum staffing protections.
- Co-design AI systems with the people who actually use them.
They note that when AI is introduced “from above,” it often degrades working conditions. For example, utility workers at a Cleveland convening described client management software that sent them on inefficient and unsafe routes because it wasn’t designed for their real-world needs.
By contrast, they highlight a collaboration between Carnegie Mellon University computer scientists and the UNITE HERE union, which co-designed an app for hotel guest room attendants. The app improves communication about issues like missing supplies and reduces labor–management conflict—a practical example of how co-designed tools can create mutual gains.
4. Treat worker input on AI as real work
Current federal rules require consultation with affected communities for high-impact AI systems, but often only after deployment. The authors argue that effective participatory design means:
- Involving workers before systems are rolled out.
- Compensating workers for their design input.
- Balancing local, use-case-specific design with the scale of modern AI systems.
In short, helping workers adapt to AI is not just about teaching new skills. It is about reimagining training, strengthening worker voice, and embedding workers directly into the design and governance of AI, so that technology reshapes work in ways that support people rather than sidelining them.