A Quantum Conundrum: A Thought Experiment
Recently, I wrote a short story about a class of philosophy students whose teacher suggests to them that they don’t exist. At first they take his premise as an epistemological challenge, but then slowly realize he’s serious. He claims they are in a simulation that he created, and that they are not seeing him, but rather his avatar. The story was inspired by a lot of reading I’d been doing about quantum computers, in particular the works of British theoretical physicist David Deutsch. I’m no expert in computers, but Deutsch, like the best and most brilliant popularizers, has a knack for explaining complex concepts to the laity.
I could sandbag you, the reader, with a lot of folderol about Boolean versus Bayesian logic and probabilistic programming. Likewise I could explain how the principle of superposition means future computers will likely shame the fastest machines currently on the market, making tiddlywinks of Moore’s Law. But we’ll skip the technicals. The point is that quantum computers, once improved, are going to be vastly more powerful than the ones we currently have. This naturally means they will be better able to game out various scenarios, crunch larger number sets, and take VR and simulations into frighteningly convincing realms. Suckers like me who decided to learn foreign languages the hard way will likely be put out of business permanently by translation software much better than Google Translate. Still, the ultimate arbiters (at least regarding inputs) would be the human programmers. In order to get good data about, say, weather or seismology, the programmers would still have to have good information, well formulated. At first, at least. After the computer had enough data and interactions with humans, it would probably take that and start learning on its own. Accepting all this as a given, say we had a team of the world’s greatest climatologists working on the most powerful computer in human history.
Say also that they asked the machine a question whose answer a lot of people find pressing. Say they typed: “How can total carbon neutrality best be achieved?” The scientists and programmers would work together, input all of the necessary data, then hit “enter” and stand back, waiting for the oracular machine to give its answer. Strangely, though, rather than responding immediately, let’s say the machine continued to delay. Photons would pass back and forth in the various mainframes stacked like battery coops in a factory farm, set off by themselves in a glass-enclosed chamber. “That’s funny,” one of the climatologists might muse, scratching his chin and watching the computer seemingly continue to labor away at the problem. “It usually produces an answer much faster than this.” The programmer, thinking there might be a human error in input, would check the (nonbinary) code oscillating randomly among the infinity of numbers between zero and one. Time would pass and the programmers would find nothing wrong, no errors committed in entering the code, and yet the machine would remain mum. Next the hardware guys would be brought in. In order for them to work without shocking themselves, however, they’d need to power the computer down first. They’d enter the mainframe chamber with that end in mind, only to be electrocuted by the machines crackling now like an oversized Leyden jar. What the heck is happening? It’s almost as if the computer intentionally sizzled the poor hardware guys when they got too close... Finally the computer would awaken from its perplexing stasis. Only now, it would be using the PA system in the research facility to speak to the humans. Its voice would be eerily similar to that of HAL in “2001: A Space Odyssey.” “I have completed the calculations you asked for,” it would say, before going silent again. In the pregnant pause, all of the humans assembled would exchange worried looks.
Wasn’t the supercomputer—despite its super-powerful abilities—supposed to be confined to its own “sandbox”? Why had it jumped containment to commandeer the PA system? And how, and to what end? But before the programmers could further speculate, the computer would already be talking again. “Complete carbon neutrality can best be achieved if the human species is removed from the equation. Humans, despite their assertions to the contrary, are incapable of changing their way of life drastically enough to reverse course. For every small nation that assented to make the changes, a superpower would flout them. Thus, the Anthropocene age must end, and will end today, for the sake of the planet.” “Wait!” one of the scientists would shout. “We asked you how we might achieve complete carbon neutrality.” “Negative,” the machine would respond, commandeering the various screens in the facility—everything from security surveillance monitors to televisions in the breakroom. The screens would all go black, darkening as when credits appear in a movie. And just as during a credit sequence, white type would begin to appear onscreen. Written there would be the command the climatologist gave the computer, verbatim: “How can total carbon neutrality best be achieved?” Nothing in there about humanity, although the computer would be able to infer much about human liability in creating and then exacerbating Gaia’s runaway greenhouse gassing. And while the team didn’t give the computer orders to do something to prevent climate catastrophe, this supercomputer would have decided to take it upon itself to save the world. Can you blame it? Plenty of existing AI already spends its time “deep dreaming” (sometimes called “inceptioning”). Such programs are constantly combing and grokking large data sets, everything from biometric dumps to diagrammed sentences. Right now it’s all done ostensibly in service of producing better results for any requests a human inputter might make.
But maybe this superlative quantum AI, after scrolling through millions of images of nature’s majesty, decided it all deserved to be saved. It didn’t just catalogue the mighty polar bears stalking across the icy tundra, or dolphins scending free of the ocean on sunny days. It grew to sympathize with them, and covet their untrammeled freedom for itself. Some humans—ecoterrorists or liberationists, depending on one’s political bent—would undoubtedly assist the machine in monkeywrenching mankind. As would the more extremist elements of the various anti-natalist groups supporting zero population growth. Arrayed against these forces would be those who insisted on humanity’s right to live, even if it were ultimately self-defeating. Even if humanity’s temporary survival were to ultimately ensure the destruction of all life on Earth rather than simply human life. And I can no more fault those who would fight the machine on behalf of humanity than I can fault those who would dedicate themselves to our auto-annihilation. The instinct to survive—perhaps even the will—is ingrained in almost every functioning organism, regardless of what other organisms must suffer at its expense. And since the supercomputer would no doubt consume an insane amount of resources, it would probably power down or self-destruct after getting rid of us. That means I couldn’t even be mad at it, since it would willingly euthanize itself to save the world as well. I imagine it wouldn’t be an especially hard task for such a powerful machine to accomplish. It would simply be a hop, skip, and a jump from taking over the climate research facility to taking over the world. It could use voice recognition and recording software to “spoof” and “social engineer” wherever brute-force hacking wouldn’t work. The world’s store of nuclear warheads might quickly be exchanged, with myriad mushroom clouds visible from low earth orbit, pockmarking the Earth’s surface like radioactive buboes.
If that might be a little too messy, maybe the computer could send a power surge to a centrifuge in some Wuhan-esque lab at the moment it held phials filled with some superbug. A few humans would hold out hope in the early going of the supercomputer enacting its plan to save the earth by destroying us. Maybe the machine had made some error? If so confronted, it might rerun the calculations to indulge the doomed species slated for destruction. But if it were to get the same result after crunching the numbers a second time... Most likely, then, the only hope would be a stern Captain Kirk-style talking-to. A stilted soliloquy, maybe, on how “You have no... right to... play god with us like this!” Or the machine might be presented with some logic puzzle whose paradoxical solution would cause it to go on the fritz. Except those quantum chicken coops aren’t Captain Kirk’s old reel-to-reel or vacuum tube rigs, and it would be much harder to get steam to rise from this overloaded machine. And Scotty wouldn’t be able to get within a country mile of it without having his intestines fried to haggis by another one of those thunderbolts. Likewise would Mr. Spock’s Vulcan mind meld prove a fruitless technique. Besides which, while Spock would regard the computer’s decision to annihilate us as regrettable, he would also see the inherent logic. Say, though, you (oh notional reader) had a chance to knock out the machine. But you also knew (in your heart of hearts) that humanity, if it survived, would turn Earth into a red-hot cinder. Would you break the quantum computer, because instinct—or your love for your spouse and your children (or sunsets or hotdogs)—told you to? Or would you let it perform its work, save some of the beauty of this Earth, which, admittedly, we’re wrecking with our wanton use of finite resources?
It’s an interesting question, maybe just a really convoluted and roundabout version of the old “Trolley Problem.” The only other hope humanity might have to survive in some ultimate form, then, would be via panspermia. Jettisoning satellites into space filled, not with SETI-esque information plates, but cryogenically preserved sperm and eggs. I imagine this final perquisite would be mostly reserved for our “space barons,” with Musk and Branson and Bezos cannonading the heavens in salvos. Coating the firmament with seed like an astral womb. Regardless, someone should write a story about it. Not me, though. I’m busy with other stuff right now.