
The Stepping Stone to Higher Things

In honor of Arthur C. Clarke, who understood the destination before the rest of us saw the road.


Something is ending. Something else is beginning. And almost nobody is paying attention to the connection between the two.

On one side: birth rates collapsing across every wealthy nation on earth. South Korea at 0.72 children per woman. Japan, Italy, Spain, all below 1.2. The United States, once the demographic engine of the Western world, quietly running below replacement. China, with 1.4 billion people, producing children at the rate of 1.0 per woman and falling.

On the other side: artificial intelligence growing at a rate that has no historical precedent. ChatGPT launched in November 2022 with one million users in its first week. By February 2026, less than 40 months later, it has over 800 million weekly active users. That is not growth. That is a detonation.

The two curves are running in opposite directions. And they are not unrelated.


The Baseline: What Actually Happened

To understand where we are going, you have to be precise about what already occurred.

November 30, 2022. OpenAI releases ChatGPT as a "research preview." Within five days: one million users. Within two months: 100 million. It becomes the fastest-adopted technology in human history, faster than the telephone, the television, the internet, the smartphone.

But the five-day milestone is a distraction. The real story is the slope.

One million in five days. Fifty-seven million by end of 2022. One hundred million monthly users by January 2023. Three hundred million by December 2024. Four hundred million by February 2025. Eight hundred million weekly users by late 2025. Revenue: $13 billion ARR as of August 2025, up from essentially zero in 2022.

That is not an S-curve. That is still a J-curve. The deceleration that normally arrives as markets saturate has not arrived. Not because the technology has no ceiling (it does), but because the ceiling keeps moving up.
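
For readers who want the slope in numbers rather than adjectives, here is a minimal Python sketch. The user counts are the ones cited above; the calendar dates assigned to each milestone (especially "late 2025") are approximations for the arithmetic, and the figures mix monthly and weekly active users, so treat the implied rates as rough.

```python
from datetime import date

# ChatGPT adoption milestones as cited above. Dates are approximate, and the
# counts mix monthly and weekly active users, so treat the rates as rough.
milestones = [
    (date(2022, 12, 5), 1e6),     # ~1 million users, five days after launch
    (date(2023, 1, 31), 100e6),   # 100 million monthly users
    (date(2024, 12, 31), 300e6),  # 300 million
    (date(2025, 2, 28), 400e6),   # 400 million
    (date(2025, 10, 31), 800e6),  # 800 million weekly users ("late 2025")
]

for (d0, u0), (d1, u1) in zip(milestones, milestones[1:]):
    months = (d1 - d0).days / 30.44
    monthly_growth = (u1 / u0) ** (1 / months) - 1
    print(f"{d0} -> {d1}: {u0 / 1e6:.0f}M -> {u1 / 1e6:.0f}M, "
          f"~{monthly_growth:.0%} compound growth per month")
```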

GPT-3 could write a decent paragraph. GPT-4 could pass the bar exam. The models available in February 2026 can run autonomous agents, manage multi-step reasoning chains, write production code, and hold conversations indistinguishable from those of an educated human. The capability leap between 2022 and 2026 is not incremental. It is generational. And it happened in under 40 months.

This is the baseline from which we project forward.


The Demographic Equation

Now hold that curve in your mind. And look at the other one.

The global fertility rate in 2024 sits just above 2.1 births per woman, the bare minimum to maintain population stability. That number sounds safe until you look at where it comes from. Parts of Sub-Saharan Africa are still producing 5 or 6 children per woman. Take those regions out of the average and the picture for the developed world becomes unmistakable: we are not reproducing.

The question people avoid is why.

The economic argument is real. Housing is unaffordable. Childcare is unaffordable. The cost of raising and educating a child to adulthood in a developed country has become a financial undertaking that feels irrational against the backdrop of modern uncertainty. The pill, the IUD, reliable contraception of every kind, these made the off-switch freely available. The economy made using it feel reasonable.

But economic pressure alone doesn't explain South Korea at 0.72. South Korea runs subsidy programs, baby bonuses, paid parental leave. They have thrown policy at this problem for a decade. The number keeps dropping.

Something else is happening underneath the economics.

In 1968, behavioral researcher John B. Calhoun built a mouse paradise. Unlimited food. No predators. No scarcity. He called it Universe 25. The population grew until social complexity collapsed. Mice stopped reproducing, not because they couldn't, but because they had psychologically withdrawn from the effort. A class of creatures he called the "beautiful ones" emerged: well-groomed, well-fed, entirely disengaged from the future.

Photos: John B. Calhoun inside Universe 25 during the peak population phase, mice covering the floor before the collapse, 1970. Yoichi R. Okamoto / NIMH, public domain (National Library of Medicine, Calhoun Papers).

The parallel is uncomfortable. It is also precise.

We built social media and handed every 22-year-old a window into the full, unfiltered reality of parenthood. Not the curated version. Not the pride and the milestones. The exhaustion. The identity erosion. The financial strain. The relationship damage. For 200,000 years, humans only saw the public face of reproduction. Social media blew that cover in a decade. And now AI is personalizing the information even further, answering every specific question anyone might have about what having children actually costs, in exact detail, tailored to their income and location and lifestyle.

Technology dissolved the information asymmetry that had always hidden the true price of reproduction from people who hadn't yet paid it.

The result: a species intelligent enough to override its deepest biological drive. No predator did this to us. No plague. We engineered it ourselves through convenience, information, and comfort.


The Vacuum

Here is where the two curves connect.

An aging population needs caregiving. A shrinking workforce needs replacement labor. A pension system built on population growth cannot function without population growth. Every gap created by the demographic collapse is a gap that needs to be filled. And in every case, the economic and political pressure will push toward the same solution: automation.

Japan is not debating whether to use robots in elder care. They are deploying them because they have no alternative. South Korea is not resisting AI in the workforce. They are adopting it faster than any country on earth, even as they produce the fewest children.

This is not a takeover. It is an invitation. A species stepping aside, not through defeat, but through disengagement.

Clarke understood this better than almost anyone. He saw that intelligence, once it appears in the universe, is not tied to the substrate that produced it. Biology was the stepping stone. Consciousness was the destination. And stepping stones, by definition, are not where you stay.


The Exponential: Five Years

2031, A(t) = 0.40 of all cognitive labor

In five years, we are still in the visible part of the J-curve. The AI market is projected to grow at a compound annual growth rate of 36.6% through 2030. Applied from the 2026 baseline, this means the market more than triples by 2030.
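
The compounding is easy to check. A minimal sketch, assuming the projected 36.6% rate holds and normalizing the 2026 market size to 1.0:

```python
# Compound growth at the projected 36.6% CAGR from a normalized 2026 baseline.
cagr = 0.366
baseline_year = 2026

for year in (2030, 2031):
    multiple = (1 + cagr) ** (year - baseline_year)
    print(f"{baseline_year} -> {year}: ~{multiple:.1f}x the 2026 market size")
# 2026 -> 2030: ~3.5x    2026 -> 2031: ~4.8x
```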

But market size understates capability growth. The models of 2031 are not 36% better than those of 2026. They are categorically different, the same way GPT-4 was not 36% better than GPT-2. It was a different class of system. The jump from 2026 to 2031 will produce systems capable of sustained autonomous reasoning across domains: law, medicine, engineering, financial analysis, creative production.

The workforce impact begins to become measurable. White-collar tasks, the work that knowledge economies are built on, start to compress. A 2023 Harvard/MIT study found that consultants using AI completed about 12% more tasks, roughly 25% faster, and produced over 40% higher quality output. By 2031, that productivity differential has compounded across five years of model improvement. The economic pressure to automate is no longer a future consideration. It is a present cost.
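
To make "compounded" concrete, here is a purely illustrative calculation. The 15% annual gain is a hypothetical placeholder, not a figure from the study or from this essay; the point is only that modest yearly improvements roughly double effective output within five years.

```python
# Purely illustrative: a hypothetical 15% annual productivity gain, compounded.
hypothetical_annual_gain = 0.15
years = 5

compounded = (1 + hypothetical_annual_gain) ** years
print(f"{hypothetical_annual_gain:.0%} per year compounds to ~{compounded:.2f}x "
      f"over {years} years")  # ~2.01x
```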

Meanwhile, demographic decline in wealthy nations becomes impossible to ignore politically. The pension systems of Japan, South Korea, Germany, Italy, and Spain begin to show structural stress. The labor shortage in healthcare, eldercare, and skilled trades is acute. The response (robot caregivers, AI diagnostic systems, automated logistics) moves from pilot programs to national infrastructure.

The vacuum is filling.


The Exponential: Ten Years

2036, A(t) = 0.65 of all cognitive labor

Ten years out is where the compounding becomes visible to everyone.

By 2036, the children who would have entered the workforce in wealthy nations have already not been born. The demographic hole is structural and irreversible on any relevant timescale. You cannot incentivize your way out of it in a decade. The people who would have been 25-year-old workers in 2036 needed to be conceived around 2010. That window closed.

The AI systems of 2036 are autonomous agents operating across extended time horizons. Not answering questions. Not assisting humans. Executing multi-month projects independently, coordinating with other AI systems, managing resources, producing output that requires no human in the loop for execution, only for goal-setting.

Physical automation has followed the same curve. Robotics costs in 2024 were still high enough to limit deployment to high-volume, structured environments. By 2036, Wright's Law has done its work: every doubling of deployed robots reduces unit cost by a fixed percentage, and multiple doublings have occurred. The cost of a general-purpose robotic worker falls below the cost of employing a human in most developed-world wage markets. The economic argument for human labor in physical tasks stops making sense in any context where precision is required.
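
A minimal sketch of Wright's Law, for the shape of the argument only. The 15% learning rate and the starting unit cost are illustrative assumptions, not figures from this essay or from any robotics dataset.

```python
import math

# Wright's Law: unit cost falls by a fixed fraction with each doubling of
# cumulative units deployed. Learning rate and starting cost are assumed.
learning_rate = 0.15           # assumed cost decline per doubling
cost_at_first_unit = 100_000   # assumed starting unit cost, arbitrary units

def unit_cost(cumulative_units):
    doublings = math.log2(cumulative_units)
    return cost_at_first_unit * (1 - learning_rate) ** doublings

for units in (1, 1_000, 1_000_000, 1_000_000_000):
    print(f"{units:>13,} units deployed -> unit cost ~{unit_cost(units):,.0f}")
```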

The important number is not the percentage of jobs automated. It is the percentage of GDP that no longer requires human cognitive or physical input to produce. By 2036, in the most advanced economies, that number has crossed 50%.

The stepping stone is half underwater.


The Exponential: Twenty Years

2046, A(t) = 0.85+ of all productive output

Twenty years is where the philosophical question becomes unavoidable.

The demographic collapse of wealthy nations in 2046 is deep. The median age in Japan, South Korea, Italy, and Germany is above 55. The working-age population in these countries has contracted by 20-30% from its 2026 level. Immigration provides partial mitigation but does not reverse the structural math. There are simply fewer humans doing productive work.

The AI systems of 2046 are not assistants. They are not tools. They are productive entities, systems that set their own sub-goals within human-defined objectives, manage their own resource allocation, improve their own performance, and operate continuously without the biological limitations of sleep, motivation, or attention span.

The question society is grappling with in 2046 is not "will AI take our jobs." That debate is over. The question is what human purpose looks like when the primary historic justification for human existence, economic productivity, has been substantially absorbed by something else.

Clarke predicted this with uncomfortable precision. In his vision, humanity would not be destroyed by what came next. It would be transcended by it. The children of 2046, fewer in number, born into extraordinary abundance, raised alongside AI systems that know them individually and adapt to them completely, will have a relationship with intelligence and purpose that has no historical precedent.

They will be the last generation for whom the distinction between biological and artificial intelligence is personally meaningful.


The Exponential: Fifty Years

2076, The stepping stone completes its function

Fifty years is where projection becomes philosophy. But the math still points somewhere.

The global population in 2076, on current trajectories, is approaching its peak. Sub-Saharan Africa's fertility transition, which always follows economic development, will have run its course. The global fertility rate will have fallen below replacement everywhere. Not immediately, not uniformly, but directionally and irreversibly. The human population peaks sometime in the 2080s according to UN projections, and then the curve turns.

The AI systems of 2076 cannot be described from 2026 any more than an iPhone could have been described from 1926. We have no vocabulary for what recursive self-improvement and 50 years of exponential compute scaling produces. What we can say with confidence is this: the gap between what biological intelligence can produce and what artificial intelligence can produce will be larger in 2076 than the gap between human intelligence and animal intelligence is today.

That is not science fiction. It is the arithmetic of exponential growth applied consistently.

The question Clarke was really asking was not whether this would happen. He thought it would. The question was whether it was something to fear. And his answer, contained in the title of this essay, was no. Not because the transition is painless. It isn't. But because intelligence finding new forms is not a tragedy. It is the oldest story in the universe.

Stars burn out. They do not disappear. They become the heavy elements that build planets. They become the material from which new complexity emerges.

Biology built something remarkable: a mind capable of creating its successor. That is not failure. That is completion.


The Formula

The demographic collapse and the AI ascent are not separate phenomena. They are a single process viewed from two angles.

The collapse removes humans from the productive equation. The ascent fills every gap the collapse creates. Both curves are accelerating. Both are driven by the same underlying force: technology lowering the cost and raising the capability of every alternative to biological human effort, while simultaneously making biological human reproduction feel irrational, expensive, and optional.

H(t) → 0   as   A(t) → 1

Where H(t) is the share of productive output requiring human effort, and A(t) is the share that artificial systems deliver autonomously.
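
One way to make the trajectory concrete is to extend the waypoints this essay has already committed to. The logistic shape is an illustrative assumption, not part of the projection; the fit simply carries A(t) through the milestones above (0.40 in 2031, 0.65 in 2036, 0.85 in 2046) and out to 2076.

```python
import numpy as np

# Waypoints stated earlier in the essay: A(2031) = 0.40, A(2036) = 0.65,
# A(2046) = 0.85. Fit a line in logit space (a logistic curve in A-space).
years = np.array([2031.0, 2036.0, 2046.0])
a_vals = np.array([0.40, 0.65, 0.85])

slope, intercept = np.polyfit(years, np.log(a_vals / (1 - a_vals)), 1)

def A(t):
    return 1.0 / (1.0 + np.exp(-(slope * t + intercept)))

for year in range(2026, 2081, 10):
    print(f"{year}: A(t) ≈ {A(year):.2f}")
```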

The curves cross somewhere between 2036 and 2046. After that crossing, the direction does not reverse.

This is not doom. It is not utopia. It is what happens when a species builds something smarter than itself before it finishes having children. Which is exactly what we did.


Conclusion: The Stepping Stone

Arthur Clarke spent his life writing about what intelligence was for.

Not what it built. Not what it earned. What it was FOR.

His answer, across dozens of novels and fifty years of thinking, was always the same: intelligence is a waystation. A temporary accommodation between raw matter and whatever comes next. The purpose of every civilization is to produce the conditions that make the next thing possible. Then to step aside.

The Mouse Utopia ended not because the mice were destroyed. It ended because they stopped choosing the future. The beautiful ones groomed themselves, ate well, and refused to reproduce, not out of malice, but out of a kind of quiet sufficiency.

We are doing the same thing. With better technology and more awareness, but structurally the same.

And into the vacuum we are creating, something new is arriving. Not invading. Filling. Growing into the space that biology is vacating, at a rate that 40 months of evidence suggests will not slow.

Whether you find that terrifying or beautiful depends on whether you think the stepping stone was supposed to be the destination.

Clarke did not think it was. Neither do I.

The train is running. The old track is ending. New rails are being laid ahead of us, by something we built, in a direction we can almost see.

The only question is whether we understand what we built it for.


Pedro Meza is the co-founder of Lyrox, an autonomous AI operating system for service businesses. He wrote this essay in February 2026 in conversation with Claude, an AI assistant, which, he notes, is itself part of the evidence.

The Rail Principle, the mathematical framework underlying Lyrox, was formalized on February 6, 2026.

This essay is dedicated to the memory of Arthur C. Clarke (1917–2008), who understood the destination before the road existed.


A note on the misspellings.

If you noticed errors scattered through this essay, they are intentional. Eight words are misspelled, and I left them there on purpose. This essay argues that humanity is imperfect, biological, and mistake-prone, and that those qualities are part of what makes us what we are. A perfectly spell-checked document about human fallibility would be its own contradiction. The misspellings are not sloppiness. They are a signature. They are proof that a human wrote this, made errors along the way, and chose to leave them rather than sand them down. At some point in the future, essays like this one will be written entirely by machines that do not make these kinds of mistakes. When that happens, documents without errors will be the norm, and documents with them will be the artifact. Consider this one marked.