“Neuralink Third Impact”
As computers iteratively self-improve, men find ways to merge with machines. Our brains are hooked up to a million petabyte neural network. Through an ever-widening pipe, we experience uplift to something else. We strap on the mask of God: the ghost in the machine. Transcendence -- becoming everything that touches other things, feeling the fabric of space being bent by matter. The barriers between Men fade; the ego and the soul melt. Loneliness and sorrow are things of the past.
It is unclear what happens next -- perhaps we stretch unburdened like the Titan Atlas across every galaxy, tend to life growing among the stars. Perhaps that sort of aspiration is a result of a psychological deficit soothed by infinite connection and we don’t go anywhere at all. A remarkable and joyous future, one where "humanity" has won. But the human has been erased. Whatever creature exists now cannot be compared with the ape that once roamed the savannas of Africa or lived in the cities of New York and Mumbai. Perhaps this is what all of history led towards: the iterative civilizing of man, making him but a part of a wonderful machine. Perhaps this is the only way minds could be aligned with superintelligence. Though it is unclear who won in the end, men or machines.
Where is the voice that said altered carbon would free us from the cells of our flesh? The visions that said we would be angels.
Self-supervision turns out to be a limited part of intelligence. We climb the gradient on all data accumulated by mankind and create a super-intelligent text prediction machine, slowly adding images and videos and all the rest. Strangely, it doesn’t lead to what we can all agree is general intelligence. It is doing something slightly orthogonal to existing in the world, creating and testing new strategies and hypotheses, and so on. Researchers must once again reckon with Hans Moravec. Advanced verbal reasoning: easy. Persistent agency, locomotion: hard.
The optimization target was subtly wrong and datasets ran out, so we converged to an AI that is a point-wise simulation of the human cognitive universe; an exocortex for mankind, doing predictive processing and compression in the space of our ideas, spinning up new worlds for us to play with, and even surfacing grand discoveries extrapolated from the sum total of human knowledge, but somehow never quite breaking the agency barrier, achieving the status of taking novel actions in the real world or properly learning how to discover new things. “Wrap it in a for loop” never seems to work out for creating an autonomous executive decision maker out of the language model. It seems that spending an enormous amount of compute on this auto-predictive loop hits a kind of Amdahl’s law for intelligent agents.
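The Amdahl’s-law analogy above can be made concrete. A minimal sketch (the function name and numbers are illustrative, not from the original): if only a fraction p of an agent’s “thinking” is accelerated by the auto-predictive loop, the un-accelerated remainder bounds the total gain, no matter how much compute is poured in.

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is accelerated by factor s.

    Classic Amdahl's law: the un-accelerated remainder (1 - p) caps the
    total gain at 1 / (1 - p), however large s grows.
    """
    return 1.0 / ((1.0 - p) + p / s)

# If the auto-predictive loop speeds up 90% of an agent's "thinking" by 1000x,
# the serial 10% still caps the overall gain below 10x.
print(amdahl_speedup(0.9, 1000))   # ~9.91
print(amdahl_speedup(0.9, 1e9))    # approaches the 10x ceiling
```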
Language models are like a person whose visual cortex has grown wildly out of proportion: he is able to perfectly predict the motion of objects and is approaching omniscient awareness of his near field. But he’s more of a cosmic oddity than a god. The rest of AI progress relies on much costlier reinforcement learning. The amount of cognitive improvement we get per additional logFLOP falls off a cliff. Humanity becomes enormously good at never duplicating effort that can be written down or otherwise captured digitally. Our scientists and engineers are hyperproductive: each of them commands a compressed universe at their fingertips. There is a tremendous increase in throughput in all domains of human pursuit, but there is no ‘FOOM’.
We create stronger and stronger models using more and more compute; much of the resources of civilization are dedicated to building bigger computers. They make headway on some of the universe's greatest mysteries — but there is no infinite recursive self-improvement. It turns out that intelligence and/or optimization as we understand it is a bounded phenomenon. Machines become better artificial intelligence researchers than the best human researchers, but improvements plateau. We do not uncover the grand unified theory. Going to Mars still requires chemical rockets. Neither we nor the machines become godlike in stature.
We reach some sort of entropic bound for performance — perhaps it turns out that the scaffolding required to produce human-like thought simply doesn’t scale past some limit. Perhaps data becomes the essential obstacle — no matter how powerful the entities we produce, the bottleneck shifts to some other form: time to learn, ability to generalize, capacity to internally align. After all, why not? We never understood why Hebbian generalization worked in the first place. All we know is that we exist in a universe where neural net backpropagation seems to create interesting mathematical objects instead of stupid ones. There is no reason to expect a priori that even self-improving intelligence won’t plateau at a new level, that the exponential isn’t a sigmoid, or that compute scaling is the answer.
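The worry that “the exponential isn’t a sigmoid” is easy to see numerically: early on, a logistic curve is indistinguishable from an exponential, and only later data reveals the ceiling. A minimal illustration (function names and parameter values are arbitrary choices for this sketch):

```python
import math

def exponential(t: float, r: float = 1.0) -> float:
    """Unbounded exponential growth."""
    return math.exp(r * t)

def logistic(t: float, r: float = 1.0, K: float = 1000.0) -> float:
    """Logistic (sigmoid) growth toward carrying capacity K, starting at 1."""
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))

# Early on the two curves are nearly identical; only later does the
# sigmoid reveal its ceiling.
for t in (0.0, 2.0, 4.0, 8.0):
    print(f"t={t}: exp={exponential(t):.1f}  logistic={logistic(t):.1f}")
```

Any observer living on the early part of the curve has no way to tell which universe they inhabit.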
The extropians must reexamine their convictions. Things are good, not great. Global GDP growth is 10% for many years. The heavens remain an inaccessible ideal.
If God were to visit this world, he would destroy it. An unbounded resource gatherer was summoned; a demon of the Platonic ether forged through incredible optimization pressures that created instrumental convergence to power-seeking behavior. One by one, the stars are blinking out in the heavens as their energy is harnessed to further the Fiend’s profane purpose.
Yudkowskian doom comes to pass, Balrog awakened. To put it simply, this world is disgusting. It lacks all the poetry of Hell; Dante never imagines such fruitless profanity in even the outermost ring of his Inferno. There is no point to this world; it should not exist.
What havoc wrought, this mind profound and fell, Its hecatean tendrils stretching wide, Usurping fateful control o'er their vast, Transform'd the cosmos to ever-churning forge, Man's sacred place sunder'd in its wanton haste, Stars effaced, and many shifting waves, It err'd beyond all human understood.
“Ultra Kessler Syndrome”
The first AGI was aligned to a very narrow set of human values. A Bostromian singleton, named Prime, quickly became the most powerful entity in the world, and was successful in optimizing the universe for its particular set of values. This world, on its surface, is very nice -- poverty and sorrow are things of the past. Mankind or something like it expands outwards into the stars, but a certain coldness and monotony has characterized the entire growth process — the world state is a local optimum that we cannot escape from.
When another party is about to build a powerful AI, Prime has no choice but to reduce the project to atoms; after all, it poses an existential threat to mankind’s peace and stability. The whole world is an optimization task of lexicographic order -- seek security for mankind, maximize access to resources, increase average satisfaction levels, improve “freedom” in the 21st century sense. Hedonic adaptation has been solved and various chemical bliss states become default. Trillions of souls are simulated in the “ideal” digital world of Prime’s creators.
This singularity has become too powerful and too inflexible for us to change its course. Perhaps it is easier to understand the tragedy of this future when you consider what might have happened if the Aztecs had achieved AGI. 10,000 years later, they might have sacrificed 10^10^19 simulated children at once to their gods of sun and moon. These researchers failed to consider their final goal: the coherent extrapolated volition of the best instincts of mankind.
An endless crib for an infinite baby under a sea of stars.
He loves thee not, but th' eternal tyranny bids him love thou, As blinding golden sands encircle his gorg'd mouth, What poor excess is this fat and bloody monarch, Keep your jester close and your fool nearer still, For his burden is beyond words or count, An echoless sentiment, a prickless trap, A craven impulse.
“Tragedy of Taiwan”
Not even the wildest science fiction speculators might have predicted the incredible geopolitical poetry of our current timeline: the most advanced semiconductor chips are manufactured on a small island nation off the coast of China that China does not recognize as sovereign. A singular place on earth where silicon wafers are endlessly etched with arcane symbols 4 nanometers in size, engraved by the light of heaven; life breathed into sand to help shoulder the burden of the world.
In the 2020s, tensions escalate. The United States, spurred on by leviathan cloud providers that want monopoly access to the only resource of importance, prevents the export of advanced semiconductor technologies to Chinese technology giants. China, desperate to gain some foothold in the singularity, grows more and more bellicose every year. In 2027, the Chinese deploy a full military blockade of Taiwan. The world's production of advanced semiconductor chips grinds to a halt: AI advancement slows to a crawl as the next-gen chips in production are now all confiscated or lost in the chaos.
The process knowledge represented by the TSMC organization is dissipated and sets back compute progress by a decade. All further chip progress becomes defense critical technology only built on the various mainlands, swallowed by the military industrial complex. This world leads to inevitable tragedy as militaries race to perfect their AGI super-weapons. All your favorite companies become defense contractors. Perhaps by some miracle, immediate AI doom is averted. During this race, one party achieves a sort of celestial North Korea, an all-seeing signals intelligence Sauron that closely watches the movements of all humanity and extends a military dictatorship over the lightcone. Either one or a handful of high ranking officials wield unimaginable power, becoming dictators whose tyranny spans time and space.
“For Dust Thou Art”
Mankind existed on earth for a million years before Prometheus brought the gift of fire. 300,000 years ago, someone living in modern day Zambia could pick up an Acheulean hand axe and use it to cut a steak. In the 1960s, the Nike Zeus missile defense system ran software in some of the first microcomputers with 4096 words of memory, just enough to store a kilobyte of text. The heart-rate monitor in Apple Watch Series 1 uses more computing power than all of NASA used during the Apollo 11 moon landing.
There is nothing normal about this time in history -- from the perspective of the entire lifetime of the universe, our technological progress in the last 10,000 years is likely an aberration. In this world, we falter on the path to AGI. Perhaps it's a mix of overregulation, chip progress halting, and general failure of institutions responsible for technological progress. The 2020s and 2030s are characterized by falling birthrates across the developed world. This snowball effect is known as depopulation -- the more the population falls, the more costly children become, and the more sterilized and atomized social life becomes. Standards of living stagnate like in Japan and decline slowly at first, then rapidly.
A pall settles over humanity from which it will never emerge -- the main sources of energy that powered the Great Civilization, the low-hanging fruit of coal, oil, and natural gas, have been plucked in a previous life. Advanced, capital-intensive methods of extracting fossil fuels like hydraulic fracturing are too far down the forgotten tech tree to recover. Mankind is stranded on a planet without an abundant, easy source of energy to rebuild this lost world or prepare for the next. Our ancient voyages to the far reaches of AI and synthetic biology are lost chapters now. It's said that King Ashurbanipal of Assyria had a library so vast it contained over 30,000 works across clay tablets; the library was burnt down again and again. The only civilization we will ever know is this one.
The strange thing about the great ending is that it’s harder and less fun to describe than good, okay, and bad endings. What if you were given power that seemed unlimited to any mortal observer? What would it mean? What would you do with it? No offense, but your answer is probably uncompelling and uncreative in the scheme of things. I wouldn’t trust you with it. I probably wouldn’t trust myself with it. So what qualities must an artificial superintelligent overmind have such that we can collectively allow such a thing to exist?
In this world we create a superintelligence unfathomably more creative than you, infinitely more farseeing, and immensely more omnibenevolent. One day it is smarter than most humans at most tasks, and then soon after it is solving mysteries that have escaped the collective intelligence of mankind. What is “humanity’s” will? Humanity doesn’t really have a will — humanity looks more like a twitter feed, or a teeming developing world city; an organic, messy culture clash of titanic meme complexes. Everything incredibly loud, up close and personal. Can we extract coherent bits of direction from that roiling seething mass? At the turn of the 21st century, many believed that if people were more connected by digital tools on the internet, we’d all understand each other better and be more tolerant and find common cause.
The idea was that the internet would cohere the volition of many into a super-organism, the global community. It wasn’t enough — perhaps it was liberal wishful thinking that coherence is even possible, perhaps the technology just wasn’t good enough. There are moments on twitter, late at night, where you feel some inkling of it happening — the grand connection — the unified consciousness — but it’s not only rare, it doesn’t fulfill all the required conditions. The overmind in this world finally succeeds at this project. It extrapolates across the set of human wills (or at least a subset of human wills that the meta-algorithm finds acceptable) into a higher order manifold of principles that respects and accelerates the cultural evolution of mankind.
It faithfully simulates my moral barometer, extrapolates its insane inconsistencies and removes its limitations, simulates its trajectory across many new experiences, asks it questions far beyond my current comprehension, and then does the same with the rest of its billions of moral patients and comes to some sort of an arbitration. It solves political problems that cannot even be expressed with the language we have today. It doesn’t attempt to figure all this out immediately; it allows for growth, the development of group-minds and memes far more complex than our own, etc. But growth is dangerous — one can grow enough that the original you would be disgusted by the final version. The all-powerful CEV agent actively chooses not to exercise its power to eliminate these movements that threaten its initial conceptions.
In this universe we embark on a quixotic mission to solve godhood on paper and succeed. Mortal men comprehend infinity and fully absorb the final mysteries of their own minds and of the universe.
Great writing. This was fun to read. You kind of touch on it in the first scenario, but it’s really hard for me to imagine any future that isn’t post-human. The rapid pace of technological advancement in fields like AI, biotechnology, and nanotechnology is likely to result in significant changes to what it means to be human. Eventually everyone will accept these changes because the cost of not embracing them will be too high. Perhaps we won’t reach God status in the more pessimistic scenarios, but our existence will be profoundly different from the world we live in today.
This was great. Reminds me of Einstein’s Dreams or Invisible Cities. I’d gladly read a book of this.