Last year, I got invited to a super-deluxe private resort to deliver a keynote speech to what I assumed would be a hundred or so investment bankers. It was by far the largest fee I had ever been offered for a talk – about half my annual professor’s salary – all to deliver some insight on the subject of “the future of technology”.
I’ve never liked talking about the future. The Q&A sessions always end up more like parlor games, where I’m asked to opine on the latest technology buzzwords as if they were ticker symbols for potential investments: blockchain, 3D printing, Crispr. The audiences are rarely interested in learning about these technologies or their potential impacts beyond the binary choice of whether or not to invest in them. But money talks, so I took the gig.
After I arrived, I was ushered into what I thought was the green room. But instead of being wired with a microphone or taken to a stage, I just sat there at a plain round table as my audience was brought to me: five super-wealthy guys – yes, all men – from the upper echelon of the hedge fund world. After a bit of small talk, I realized they had no interest in the information I had prepared about the future of technology. They had come with questions of their own.
They started out innocuously enough. Ethereum or bitcoin? Is quantum computing a real thing? Slowly but surely, however, they edged into their real topics of concern.
Which region will be less affected by the coming climate crisis: New Zealand or Alaska? Is Google really building Ray Kurzweil a home for his brain, and will his consciousness live through the transition, or will it die and be reborn as a whole new one? Finally, the CEO of a brokerage house explained that he had nearly completed building his own underground bunker system and asked: “How do I maintain authority over my security force after the Event?”
The Event. That was their euphemism for the environmental collapse, social unrest, nuclear explosion, unstoppable virus, or Mr Robot hack that takes everything down.
This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader? The billionaires considered using special combination locks on the food supply that only they knew. Or making guards wear disciplinary collars of some kind in return for their survival. Or maybe building robots to serve as guards and workers – if that technology could be developed in time.
That’s when it hit me: at least as far as these gentlemen were concerned, this was a talk about the future of technology. Taking their cue from Elon Musk colonizing Mars, Peter Thiel reversing the ageing process, or Sam Altman and Ray Kurzweil uploading their minds into supercomputers, they were preparing for a digital future that had a whole lot less to do with making the world a better place than it did with transcending the human condition altogether and insulating themselves from a very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. For them, the future of technology is really about just one thing: escape.
There’s nothing wrong with madly optimistic appraisals of how technology might benefit human society. But the current drive for a post-human utopia is something else. It’s less a vision for the wholesale migration of humanity to a new state of being than a quest to transcend all that is human: the body, interdependence, compassion, vulnerability, and complexity. As technology philosophers have been pointing out for years now, the transhumanist vision too easily reduces all of reality to data, concluding that “humans are nothing but information-processing objects”.
It’s a reduction of human evolution to a video game that someone wins by finding the escape hatch and then letting a few of his BFFs come along for the ride. Will it be Musk, Bezos, Thiel … Zuckerberg? These billionaires are the presumptive winners of the digital economy – the same survival-of-the-fittest business landscape that’s fueling most of this speculation to begin with.
Of course, it wasn’t always this way. There was a brief moment, in the early 1990s, when the digital future felt open-ended and up for our invention. Technology was becoming a playground for the counterculture, who saw in it the opportunity to create a more inclusive, distributed, and pro-human future. But established business interests only saw new potentials for the same old extraction, and too many technologists were seduced by unicorn IPOs. Digital futures became understood more like stock futures or cotton futures – something to predict and make bets on. So nearly every speech, article, study, documentary, or white paper was seen as relevant only insofar as it pointed to a ticker symbol. The future became less a thing we create through our present-day choices or hopes for humankind than a predestined scenario we bet on with our venture capital but arrive at passively.
This freed everyone from the moral implications of their activities. Technology development became less a story of collective flourishing than personal survival. Worse, as I learned, to call attention to any of this was to unintentionally cast oneself as an enemy of the market or an anti-technology curmudgeon.
So instead of considering the practical ethics of impoverishing and exploiting the many in the name of the few, most academics, journalists, and science fiction writers instead considered much more abstract and fanciful conundrums: is it fair for a stock trader to use smart drugs? Should children get implants for foreign languages? Do we want autonomous vehicles to prioritize the lives of pedestrians over those of their passengers? Should the first Mars colonies be run as democracies? Does changing my DNA undermine my identity? Should robots have rights?