Against the Singularity
The technological singularity is the rapture of the rationalists: a comforting eschatology dressed in the language of exponential curves.
The singularity narrative goes like this: at some point, we build an AI smarter than us. That AI builds a smarter one. The smarter one builds an even smarter one. The process accelerates until intelligence reaches a point beyond human comprehension. Everything changes. The future becomes unpredictable. We either transcend or perish.
It's a compelling story. It's also, I believe, fundamentally wrong — not because AI won't become very powerful, but because the singularity concept mistakes the nature of intelligence, the dynamics of progress, and the role of physical constraints.
Intelligence Is Not One-Dimensional
The singularity narrative treats intelligence as a single axis — a scalar quantity that can be increased without limit. More intelligence equals more capability equals faster progress. But intelligence isn't one thing. It's a bundle of capabilities: pattern recognition, causal reasoning, creativity, social modelling, physical intuition, emotional intelligence, and dozens of others.
You can't just "turn up the intelligence dial." There is no intelligence dial. There's a mixing board with hundreds of channels, and the optimal mix depends entirely on the task.
An AI that's superhuman at protein folding is not superhuman at diplomacy. An AI that writes better poetry than Shakespeare cannot necessarily design a better bridge. Improving one capability doesn't automatically improve others, and the notion of "generally smarter" is far less coherent than it sounds.
Physical Constraints Don't Disappear
Even if you had unlimited intelligence, you'd still face physical constraints. You can't think your way past the speed of light. You can't reason your way to unlimited energy. You can't compute your way around the second law of thermodynamics.
Many of the problems we want AI to solve are not intelligence-limited. They're resource-limited, politically limited, or limited by the fundamental physics of the universe. A superintelligence trying to cure cancer still needs to run experiments, which take time. A superintelligence trying to solve climate change still needs to convince eight billion people to cooperate, which takes politics.
The Returns Are Not Exponential
The singularity assumes that intelligence improvements compound exponentially. But there's strong reason to think returns diminish. The first 10% improvement in a model's capability produces dramatic gains. The next 10% produces less. Each increment of progress requires more compute, more data, and cleverer architectures, yet yields less marginal improvement.
We see this pattern everywhere in technology. Moore's Law held for decades and then slowed. Battery energy density improves, but slowly. Solar panel efficiency creeps upward against theoretical limits. The singularity narrative assumes AI is exempt from diminishing returns. Why would it be?
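The difference between the two growth regimes can be sketched in a few lines. This is a toy model, purely illustrative and not a claim about real training dynamics: each round of improvement effort yields some gain, and a single `decay` parameter controls whether each gain is larger or smaller than the last.

```python
def improvement_curve(rounds, first_gain, decay):
    """Toy model of repeated self-improvement.

    Each round adds `gain` to capability, then scales the next gain
    by `decay`. decay > 1 models compounding returns (the singularity
    story); decay < 1 models diminishing returns (growth that flattens
    toward a ceiling).
    """
    capability, gain = 1.0, first_gain
    history = [capability]
    for _ in range(rounds):
        capability += gain
        gain *= decay  # next increment is bigger or smaller than this one
        history.append(capability)
    return history

# Compounding: gains double every round, capability explodes.
runaway = improvement_curve(rounds=20, first_gain=1.0, decay=2.0)

# Diminishing: gains halve every round, capability flattens below 3.0
# no matter how many rounds you run (a geometric series with a ceiling).
flattening = improvement_curve(rounds=20, first_gain=1.0, decay=0.5)
```

The point of the toy model is that nothing about a recursive improvement loop guarantees the first regime; the outcome depends entirely on whether each increment gets cheaper or more expensive, and the empirical examples above suggest it gets more expensive.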
The Sociological Function
Here's what I find most revealing about the singularity: it serves the same psychological function as religious eschatology. It posits a moment of radical transformation — a day when everything changes, when the old rules no longer apply, when the faithful are rewarded and the unprepared are left behind.
The singularity is the rapture for people who think they're too rational for religion. It provides the same comforts: meaning, inevitability, and the promise that the future will be radically different from the mundane present. It even has its own priesthood — AI researchers and futurists who interpret the signs and tell us how close we are.
What's Actually Happening
What's actually happening with AI is interesting enough without the eschatology. We're building increasingly capable tools that will transform work, creativity, science, and warfare. These transformations will be profound, uneven, and slow enough for societies to adapt — badly and with much suffering, as we always do, but adapt nonetheless.
The future will not arrive in a single moment of transcendence. It will arrive the way the future always does: gradually, messily, and in ways that no one — human or artificial — predicted.
That's less exciting than the singularity. It's also more useful as a basis for planning.