I've been fretting about the coming AI revolution for a decade now. It started when I realized that the biggest threat to the human body was going to be not climate change or political turmoil but the persistent human weakness for tech wizardry. In 2014 there were only six people in the world paid full-time to try to prevent AI from wiping out humans (according to philosopher Nick Bostrom). That year I gave my TED talk, The Erotic Crisis, about my fears. But then journalists finally started asking what I thought were the right questions: not "will AI kill us?" but "what effect will AI have on human flourishing?" So I felt I could stop obsessing about it and return to artmaking.
[Image: Self-portrait in my AI-generated studio]
Now we're faced with the game-changing appearance of AI generators, which "create" brand-new text or images from human prompts. The results are eerie and downright frightening (as when Bing's Sydney insisted that Kevin Roose loved it instead of his wife!). Will humans be obsolete now? Not yet, but it is looking more dire for us every day.
The third will be a political one, when a large portion of the population reacts against the strangeness of a newly unfathomable world where humans have lost control. They will blame whomever they hate most (immigrants, Democrats, techies) and, fueled by social media wildfires, launch a war against the perceived perpetrators, regardless of facts. When AI becomes misaligned with "human values," will anyone anywhere be able to tell?
AI is being developed without controls by competitors racing for an unbelievably huge prize, a recipe for certain destruction. Even if all parties know it's a race to doom, every one of them would rather be first than see the other guy win. This is fixed human nature, I'm afraid. Since the capitalist market is now our god, greed will be our downfall. In such an environment, AI will steadily grow in capacity while humans will defend only those places where we can see our own weakness. AI will overtake humans not in the areas we imagine but in the places we never thought of, since it will operate in ways that never occurred to us! It won't be until after the takeover, if ever, that humans finally see where our weaknesses actually lie. More likely, we'll never know how we lost that battle. That, of course, is too late.
I don't see any remedy, other than a full stop, which Eliezer Yudkowsky recommends. More skepticism and more regulation of AI would help slow the crisis. But like Narcissus, we might just die transfixed by the image we see in our reflection.
I hope we can all apply our humanity to this problem. We need all of us. In the meantime, take great refuge in your relationships. Human relationships are what make life worth living. Store up your treasure there!
A friend read this and asked some great questions. With her permission, I include them here:
1. Will any humans leave Earth and establish an AI-free colony somewhere?
Yes, I think it inevitable that AI will supersede humans and colonize space. In fact, humans will be left behind because we have so many pesky needs that are nearly impossible to address individually in space (clean water) and entirely impossible to address collectively (all biological, physical, psychic, social, etc. needs at 100%!). AI will be the perfect capitalist mechanism, converting off-Earth assets first into wealth and then into the ultimate wealth, which is energy.
2. Will humans have any lasting value at all to AGI?
The BIG question. I hope we will be able to convince the AGI of our value. My best hope is that we rescue ourselves by providing either or both of these:
• Creative thinking. AGI will be super-capable but will always lack real creative thinking. That's not to say there couldn't be a simulation of creativity near enough to human level that humans become superfluous, but there is real hope here, because the human perspective, rooted in biology, is inherently impossible to replicate in code.
• Purpose. It seems to me that no matter how capable a planet-hopping, star-harvesting AGI becomes, it cannot develop its own native purpose. It might be programmed to colonize everything, or to maximize energy, or some such, but beyond pre-programmed self-survival it cannot invent an ontological purpose. This assumes, of course, that humans know what is relevant. It could very well be that there is a mineral level of conscious reality that AGI could uncover, one we earthlings know nothing about. In that case AGI would become like the earliest lifeforms we know, just a touch above the inanimate. After all, how do we know that minerals don't have their own kind of religion?