The Quiet Replacement: How AI Could Reshape Human Agency
Why the Mechanics of AI-Driven Societal Change Matter
Recent developments in AI are revealing a concerning disconnect between how companies plan to implement AI automation and how well the public understands its potential societal impact. Two recent publications help illuminate this divide, offering complementary perspectives on how AI agents might reshape the workforce and broader society.
The first is a blog post from a member of the Public Policy team at Google DeepMind, published on the AI Policy Perspectives Substack under the title “An agents economy”. It speculates about how AI agents could be integrated into the workforce, potentially leading to leaner, more efficient organizations in which agents progressively replace human roles by learning tacit knowledge and streamlining processes, and it even suggests that human involvement could become counterproductive to peak efficiency.
The second is a paper titled “Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development”, which argues that incremental AI progress will gradually push humans out for competitive reasons, rather than through the single seminal event of AI asserting dominance over humans that is typically portrayed.
These two perspectives are particularly compelling because they represent different yet complementary views of how AI automation might reshape our economy and society. “An agents economy” offers an insider's view of how tech companies are thinking about AI integration - pragmatic, focused on organizational efficiency, and notably candid about the potentially diminishing role of human workers. It acknowledges the need to understand the broader societal impact and defers that discussion to a future post, but its focus on the very pragmatic interaction between humans and AI is illuminating.
The academic paper, in contrast, steps back to examine the broader systemic implications of these business decisions. While companies like DeepMind focus on organizational optimization, the researchers highlight how these individual rational choices could collectively lead to unintended consequences for human agency in our economic and social systems.
Together, these pieces offer something rare in current AI discussions: a detailed examination of the mechanics of AI adoption rather than just speculative end states. By understanding both the business logic driving automation and its potential systemic effects, we can better appreciate the gap between corporate automation strategies and their broader societal implications.
Agents Will Just Automate the Boring Work … Right?
Probably not!
Both pieces articulate how agents, and AI in general, are likely to improve in the near to medium term and begin to exceed human capabilities in many areas. “An agents economy” suggests that at first it will be very difficult for agents to fully grasp some of the softer elements of the workplace, like politics, communication styles, and preferences, and that initial collaboration between humans and AI will therefore be necessary. But this could be just a temporary state; the article speculates that:
“Over time, as agents take on more and interact primarily with each other, the know-how derived from human quirks will lose its value. Organisations will adapt to these changes by restructuring to better align with agents’ needs. Rather than accommodating Sally in Finance’s arbitrary preferences, it will become more economical to replace the role entirely with an agent.”
To me this dynamic seems very similar to many of the arguments around self-driving cars. Current technological capabilities dictate that humans remain involved in the driving process, as full self-driving is still prone to mistakes. While self-driving is already superior in many, though not all, scenarios, human drivers still dominate today. This parallel is particularly striking when we consider infrastructure. Just as roads may eventually be optimized for autonomous vehicles rather than human drivers, organizational systems and processes will likely be redesigned around AI agents rather than human workers. In both cases, what starts as adaptation to human patterns may end in systems that actively discourage human participation.
It’s a common trope that when AI automates tasks, humans will just shift to something else “more strategic.” But modern AI systems are designed precisely to replicate and learn from human inputs and feedback, which will continue to shrink the pool of “more strategic” options. “An agents economy” addresses this by describing a scenario in which middle management, and potentially entry-level employees, become unnecessary because agents can execute strategy dictated by a director-level employee:
“But then how useful is the intermediary human really? Under the above proposed model, the director delegates tasks directly to the agents, bypassing the need for human intermediaries. If taste, curation, and tacit knowledge are no longer where humans outperform agents, the rationale for keeping these intermediary roles diminishes. Delegating directly to agents (who benefit from sufficient context and capabilities) creates a cleaner, more efficient chain of command and reduces the risk of misalignment. I expect these restructurings will gradually decrease the involvement of human employees over time.”
Not everyone can be a director or founder of a company, so what is left for everyone else?
The Quiet Erosion of Human Agency
I tend to hyper-focus on the economic impacts of AI, but "Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development" shows how more capable AI could drive disempowerment across the full spectrum, from the economy to culture to governance. This disempowerment stems from a decreasing reliance on human cognition and labor, creating unintended and compounding consequences. While I'll focus on the economic pieces below, the paper's broader analysis of societal impact is worth examining for those interested.
The obvious question arises: why don't we just stop this from happening? The answer lies in our incentive structures, which inexorably push us toward automation, not away from it. Adding to the points in the prior section, the paper argues that once agents reach a certain level of capability, speed becomes a driving factor, and that as more investment flows toward AI, there is less incentive to invest in people, from both a business and a governance perspective.
“Companies that maintain strict human oversight would likely find themselves at a significant competitive disadvantage compared to those willing to cede substantial control to AI systems, potentially to the point of becoming uncompetitive….As tasks become candidates for future automation, both firms and individuals face diminishing incentives to invest in developing human capabilities in these areas. Instead, they are incentivized to direct resources toward AI development and deployment, accelerating the shift away from human capital formation even before automation is fully realized….The loss of tax revenue from citizens would make the state less reliant on nurturing human capital and fostering environments conducive to human innovation and productivity, and more reliant on AI systems and the profits they generate.”
Alignment is often cited as the ultimate goal of AGI: AI acting purely in the best interests of humanity. However, this paper makes a crucial observation: our current societal infrastructure already produces misaligned outcomes, such as companies pushing against regulation for their own profit or governments leveraging their resources to disempower citizens. Even an "aligned" AI-dominated economy and governance could produce outcomes that fail to serve society's best interests, simply because the underlying system itself is imperfect. Particularly concerning is how people at the top could leverage AI to automate previously human-performed tasks, potentially accelerating income inequality to an unrecoverable degree.
Even though none of what has been discussed appears, on the surface, to be a cataclysmic event, the combination of all these factors over time could lead to an existential crisis in the form of a complete loss of human influence.
Why does all of this matter?
"Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development" closes with a call to action that resonates with me:
"Humanity's future may depend not only on whether we can prevent AI systems from pursuing overtly hostile goals, but also on whether we can ensure that the evolution of our fundamental societal systems remains meaningfully guided by human values and preferences. This is both a technical challenge and a broader civilizational one, requiring us to think carefully about what it means for humans to retain genuine influence in an increasingly automated world.”
There's a common psychological barrier when confronting systemic change: the belief in personal exception. We tend to think our roles are too nuanced, our skills too specialized, or our positions too essential to be affected by automation. This selective optimism obscures the fact that, in a system driven by efficiency and optimization, with AI steadily moving up the cognitive chain, traditional notions of job security become increasingly fragile.
The challenge we face isn't simply about preventing job displacement – it's about reforming our fundamental incentive structures. While the drive toward automation and efficiency gains is rational from a business perspective, we lack corresponding mechanisms to ensure these advances translate into broader societal benefits. The pragmatic efficiency described in "An agents economy" might well be inevitable, but without deliberate intervention, it could optimize for metrics that no longer reflect human flourishing.
This brings us to a profound question that demands immediate attention: we have built our entire societal framework on the foundation of human labor and cognition. What happens to this framework when neither remains the primary driver of economic value? The answer will shape not just our economic future, but the very nature of human agency in an AI-augmented world.

