Over the past few weeks, a couple of pieces of writing have dominated the AI discourse. Matt Shumer's "Something Big Is Happening" has been viewed over 80 million times. Citrini Research's "2028 Global Intelligence Crisis" spooked actual markets, contributing to an 800-point Dow drop and a 13% single-day collapse in IBM stock. Meanwhile, Anthropic CEO Dario Amodei warns that AI could eliminate half of all entry-level white-collar jobs within five years, potentially pushing unemployment to 10-20%. Even Sam Altman, while painting a rosier long-term picture, acknowledges serious short-term disruption.
I take all of this seriously. But I think the conversation is stuck on the wrong question. The question everyone is debating is "how many jobs will AI destroy?" The better question is "what happens to the demand for work when the cost of building drops to near zero?"
Those are very different questions. And they lead to very different answers.
Software is the substrate
The food you eat, the car you drive, the retailers you interact with, the entertainment you consume, the way you connect with friends and family. All of it runs on software. Software engineering is not an industry in the way that, say, steel manufacturing is an industry. It is the implementation layer of modern life.
So, when someone says, "AI is going to disrupt software engineering," they are not describing a threat to one profession. They are describing a transformation of the mechanism through which human preferences get translated into reality. Framing it as "will AI take software jobs?" is like asking "will tractors take farming jobs?" while ignoring that tractors changed what humanity could eat, where people could live, and how civilization organized itself.
And this logic extends beyond software. Most people who need a lawyer can't afford one. Most small businesses can't access the financial analysis that large firms take for granted. Most patients can't get the personalized medical attention their conditions warrant. In every domain where professional expertise is expensive, there is a vast backlog of unmet need suppressed by cost. AI changes that math across the board.
That reframe changes everything about what happens next.
The invisible backlog
What the doom scenarios miss is that the current volume of work is not the total demand for work. It is the demand that survived a cost filter.
Think about all the things that would make life better but that nobody has built because the economics don't work. The small restaurant that can't afford custom software for managing its particular supply chain quirks. The local music teacher who can't commission an app tailored to how his students actually learn. The family that can't get software that reflects its specific routines. The first-time founder who can't afford a real lawyer to review her contracts.
These aren't hypothetical luxuries. They are real needs that real people have, but the cost of bespoke solutions has always made them invisible. Nobody bothers to articulate demands they know can't be met.
When AI dramatically reduces the cost of building, these demands don't stay invisible. They become visible. They become feasible. And the work to fulfill them does not consolidate into fewer jobs. It fragments and multiplies: different work, but radically more of it.
The anxiety about AI and jobs assumes a fixed pie. But the pie is a function of human desire, and human desire is unbounded.
Prediction is not judgment
There is a reasonable counterargument here, and I want to take it seriously. Someone might say "fine, but AI can also figure out what people want. It can predict demand, design solutions, and build them. The whole chain gets automated."
This confuses two very different things.
AI is getting remarkably good at prediction. It can increasingly identify what someone is likely to click on, likely to buy, likely to engage with. Pattern recognition at scale. But "likely to want" is not the same as "worth building."
Prediction tells you what is probable. Judgment tells you what is worthwhile. And judgment is not a soft skill or a vague abstraction. Deciding what to build and determining whether it worked is the concrete work of specification and evaluation. It is product management, design, strategy, curation, and prioritization. These are real jobs with real labor markets and real demand that grows as the space of what's possible expands.
Every engineer will tell you the hardest part of the job is not writing the code. It is figuring out what the code should do and knowing whether it did it well. AI is rapidly automating the middle of that process. It is not automating the ends.
We already have a large-scale experiment in what happens when you skip the judgment and go straight from prediction to production. It is called the algorithmic content feed. The feed is full of things you will click on but would not choose if you were being deliberate about your life. Engagement-maximized outcomes that nobody endorses on reflection.
Would you click on it (prediction) versus would you choose it (judgment). That is the difference. And as AI explodes the space of what could be built, the human question that generates all the new work is: which of these things should be built, for whom, in what form, reflecting whose values?
The bottleneck shifts; it does not disappear
Right now, scarcity in the production function lives in engineering capacity. That is what is expensive and that is where labor demand concentrates. When AI makes that capacity abundant, scarcity migrates. It moves to the input side: specification. What should we build and why? And it moves to the evaluation side: did we build the right thing, is it working, does it serve the people it is supposed to serve?
Specification and evaluation are not new categories of work. They are already recognized as bottlenecks in every software organization, every law firm, every consulting engagement. What changes is their proportion. They go from being a small fraction of the total effort to being the dominant activity, the place where most of the time, talent, and resources concentrate.
This is a story about the nature of work shifting from "can we build this?" to "should we build this?" The second question is harder, more human, and generates more work than the first.
The transition problem is real. Don't confuse it with the destination.
This is where I want to engage directly with what Amodei is saying, because I think he is making the most serious version of the counterargument. His concern is not that there won't be new work. It is that AI affects a broader range of human abilities than previous technologies and moves faster than any disruption we have experienced. Previous transitions played out over decades. This one could compress into years. The gap between displacement and reabsorption could be wider and more painful than anything in recent history.
I think he may be right about that. And it is worth taking seriously.
But there is a critical difference between a transition problem and a destination problem. A transition problem means the path is painful, but the direction is toward more work, more complexity, more opportunity. A destination problem means the work itself is disappearing permanently. Citrini's "2028 Global Intelligence Crisis" describes a destination problem, a displacement spiral with no natural brake. Amodei, to his credit, is describing a transition problem, the real pain that requires real policy responses, but not a permanent collapse.
The tractor analogy holds. Most farmers did something else, and civilization became dramatically more complex and better for it. Not painlessly. Not overnight. But expansively. The question was never whether there would be enough work. It was whether we would manage the transition well enough for people to get to it.
That is the question we should be asking now.
The real scarcity
"Human resources" has a double meaning that matters here. The people who do the work are also the people whose preferences define what is worth doing. The food we eat, the music we listen to, the relationships we want, the experiences we seek. These are informed by us, not by machines.
Here is what it looks like when AI makes building cheap. That small business owner whose accounting software never fit her workflow? She describes what she needs, and it gets built. The music teacher whose students learn differently from what the off-the-shelf app assumes? He specifies what he wants and it exists. The first-time founder who couldn't afford a lawyer? She gets legal analysis tailored to her actual situation.
Right now, most people are consumers of tools designed for the median user. You adapt to the tool. When building gets cheap, the tool adapts to you. Your preferences shape the system rather than the system shaping your behavior. That is what agency looks like in practice: the ability to have your needs reflected in the tools and systems you use, rather than contorting your life to fit someone else's design decisions.
The work of making that happen: the specifying, the evaluating, the prioritizing, the curating, is enormous. It is deeply human. And it is just getting started.
The real scarcity is not engineering capacity. It is human judgment about what deserves to exist.